What happens when we remove the human from a fact or a piece of information? Does it change how we perceive it? This thought came to mind when I was questioning whether community-based sites such as Stack Overflow are still relevant, and made an open-ended remark that Generative AI has now starved us of knowing the person behind the knowledge.
Historically, we have accepted knowledge through some form of testimony: we believe a claim based on the person who told it to us. We evaluate their character, their knowledge and, most importantly, their honesty. With AI, there is no "person" to trust. You cannot evaluate an AI's moral character or life experience, because it has none.
To demonstrate this point, let's take the following statement about the US economy:
The stock market is the highest it's ever been. We have the greatest economy in the history of our country.
If you heard this from Donald Trump (he has made this claim many times), you would likely question it immediately. We are familiar with his rhetorical style: he often bends the truth and prioritises hyperbole over precision. Our scepticism is triggered by the source.
However, if you asked a financial analyst, you would get a more nuanced response:
While the market did hit record numbers (which happens naturally due to inflation), the rate of growth was not actually the 'greatest in history'. At the three-year mark, the market was up roughly 45% under Trump, compared to 53% under Obama and 57% under Clinton.
When we remove the human source, we lose this critical context. By stripping away the "who", we put the accuracy of the "what" in jeopardy. AI takes insights that required years of research and lived experience, strips them of their author, and repackages them with its own bias for our instant consumption. I rarely see the likes of ChatGPT or Gemini offer true attribution to the human behind the data, so that we can do our own vetting.
I am acutely aware of this from building one of my own projects with AI, focused on the stock market and the economy, where the data can be subjective and context-dependent. One example is trying to provide the reasoning behind changes in key indices and commodities: the reasoning behind a change in value often hides a dozen competing narratives. When I built my application, I realised that if the AI chooses one narrative over another without telling me why, or who championed it, it isn't just summarising the truth; it is effectively editing it.
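To make that concrete, here is a minimal sketch of the guard-rail I'm describing (not my app's actual code): demand the competing narratives and their champions up front, rather than accepting a single unattributed summary. The `ask_model` client and the tuple format it returns are hypothetical stand-ins for whatever LLM library and response parsing you happen to use.

```python
from dataclasses import dataclass

@dataclass
class Narrative:
    claim: str       # the explanation offered, e.g. "rate-cut hopes lifted equities"
    source: str      # who championed it: an analyst, institution, or publication
    confidence: str  # how strongly the model backs this narrative

# A prompt that forces the model to surface competing narratives and name
# who is behind each one, instead of silently picking a winner for me.
PROMPT_TEMPLATE = (
    "Explain today's move in {index}. List every competing narrative you "
    "are aware of, and for each one name the analyst, institution or "
    "publication that championed it. Do not merge them into a single "
    "unattributed summary."
)

def explain_move(index: str, ask_model) -> list[Narrative]:
    """Ask a model for attributed narratives behind a market move.

    `ask_model` is a hypothetical stand-in for an LLM client plus
    whatever parsing your app does; here it is assumed to return
    (claim, source, confidence) tuples.
    """
    raw = ask_model(PROMPT_TEMPLATE.format(index=index))
    return [Narrative(claim=c, source=s, confidence=conf) for c, s, conf in raw]
```

It doesn't solve the attribution problem, but it at least turns the model's silent editorial choice into something I can see and question.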
Now, I don't want this post to come across as negative towards AI; that would be pretty hypocritical after my glowing take on how I use the technology in my previous post. It has just made me more conscious that while the knowledge AI presents doesn't necessarily lack meaning, it might lack soul. We get the answer, but we miss the human condition that made the answer necessary in the first place.
We have to acknowledge that AI is an incredible tool for gathering information, but it should be the starting point, not the finish line. Use it to broaden your search, but go to people to deepen your understanding.