Wait, AI Is An Illusion? And Can’t Think Like A Human?
TLDR: Behind the media noise about AI taking over human life there’s a reality worth understanding. AI isn't magic, and it's not thinking like a human. Once you understand how it works, you'll see why it's not as scary as it seems and is a really valuable tool for the work you're already doing.

No, the ‘thinking…’ indicator and deliberative phrasing you see while AI develops a response to your question does not mean it’s thinking. Really it’s just generated output styled to resemble some semblance of the human thought process.

So how do text models like ChatGPT, Claude or Gemini work if they aren’t actually consciously thinking and reasoning?

How AI Models Actually Work

Tokens, Not Words

AI works on pattern recognition and token prediction. 

“Tokens” = pieces of meaning that might be words, parts of words or punctuation. 

For example, the word 'darkness' could be split into two tokens 'dark' and 'ness' rather than just being processed as one whole word.
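
If you want to see this for yourself, here's a minimal Python sketch using tiktoken, OpenAI's open-source tokenizer library (installed with pip install tiktoken). The exact splits depend on which tokenizer a model uses, so 'darkness' may come back as one token or two.

```python
# Minimal sketch: turning text into tokens with tiktoken.
# Exact splits vary by model and tokenizer version.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by several GPT models

for word in ["darkness", "tendering", "Hello, world!"]:
    token_ids = enc.encode(word)
    pieces = [enc.decode([t]) for t in token_ids]  # decode each token id back to text
    print(f"{word!r} -> {len(token_ids)} token(s): {pieces}")
```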

Then, based on extensive training on vast amounts of text data, the AI predicts the most likely next token in the sequence: "Given this sequence of tokens, what should logically come next?"

No consciousness or philosophical pondering about how to answer your question. Just sophisticated pattern matching that's become good enough to feel conversational.
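
To make "pattern matching" concrete, here's a toy sketch of next-token prediction. The context and probabilities are invented for illustration; a real model computes them from billions of learned parameters.

```python
import random

# Toy next-token predictor. The probabilities below are made up to show
# the mechanic: generation is just repeated "pick a likely next token".
NEXT_TOKEN_PROBS = {
    ("the", "tender", "closes"): {"on": 0.55, "at": 0.30, "soon": 0.14, "banana": 0.01},
}

def predict_next(context):
    probs = NEXT_TOKEN_PROBS[context]
    tokens, weights = zip(*probs.items())
    # Sample in proportion to probability -- no reasoning, just weighted choice.
    return random.choices(tokens, weights=weights, k=1)[0]

print(predict_next(("the", "tender", "closes")))  # usually "on" or "at"
```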

Context Window Limitations

The context window is basically the AI's short-term memory: how much of the previous conversation it can use when predicting a response.

Different models have different limits. Some handle a few pages of context, others can process entire documents or multiple documents simultaneously. 

This limitation affects performance. If the context window is short, the AI might lose track of important details in long conversations and become less specific in its response.

A basic example of this is to imagine you're talking to someone who can only remember the last 15 minutes of the conversation.
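
Here's a rough sketch of how a chat application might enforce that limit: keep the newest messages that fit in the budget and let older ones fall away. The word-count stand-in for tokens is a simplification; real systems count actual tokens.

```python
# Sketch of trimming chat history to fit a context window.
def trim_to_window(messages, max_tokens=50):
    kept, used = [], 0
    for msg in reversed(messages):      # walk back from the newest message
        cost = len(msg.split())         # crude stand-in for a real token count
        if used + cost > max_tokens:
            break                       # older messages fall out of "memory"
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = ["first message ...", "second message ...", "the newest, most relevant message"]
print(trim_to_window(history, max_tokens=8))
```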

Why Different Models Vary

ChatGPT, Claude, and Gemini (as examples), often give different answers because each AI system is built with a different underlying architecture. 

This means they each use different rules and methods for weighing the text patterns they learned during training. As a result, different models will focus on different parts of your prompt when generating an answer.

This leads to slight variations in output across the different models.
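
Here's a toy illustration of that idea, with two invented weightings (not real model internals): the same input, scored differently, produces different outputs.

```python
# Two toy "models" scoring the same candidate next tokens differently.
# The weights are invented; the point is that different learned weightings
# over the same input lead to different answers.
MODEL_A_WEIGHTS = {"compliant": 0.5, "competitive": 0.3, "complete": 0.2}
MODEL_B_WEIGHTS = {"compliant": 0.2, "competitive": 0.6, "complete": 0.2}

def pick(weights):
    return max(weights, key=weights.get)  # each model picks its highest-weighted token

print("Model A says:", pick(MODEL_A_WEIGHTS))
print("Model B says:", pick(MODEL_B_WEIGHTS))
```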

Why Understanding This Is Relevant 

The more you strip away AI's mystical, fear-inducing aura and view it simply as a powerful pattern-recognition machine, the more strategically you can use it as a workflow enhancer, not a workflow replacer.

Here are some good use cases of AI for tendering professionals:

  • Automating repetitive, pattern-heavy tasks like drafting and expanding responses
  • Quick analysis of large text volumes - contract or legislation review
  • Content suggestions based on successful examples 
  • Catching grammar inconsistencies, syntax errors or clunky sentences 
  • An objective second opinion on your work

AI isn't magic, and it isn't functioning at human-level intelligence. It's just sophisticated pattern recognition, predicting what comes next based on its training data.

The Bottom Line

The competitive advantage goes to professionals who understand what AI does and can use it to streamline their workflow, not those who fear it and think it functions like a human. AI ‘thinks’ best under your guidance: it needs effective prompts and clear direction to produce optimal responses.