Blog

Tagged by 'ai'

  • Learning from Algorithms Instead of People

    What happens when we remove the human from a fact or a piece of information? Does it change how we perceive it? This thought came to mind when I was questioning whether community-based sites such as Stack Overflow are still relevant, and made an open-ended remark that Generative AI has now starved us of knowing the person behind the knowledge.

    Historically, we accept knowledge through some form of testimony: we believe something based on what a person has told us, evaluating their character, their knowledge and, most importantly, their honesty. With AI, there is no "person" to trust. You cannot evaluate an AI's moral character or life experience because it has none.

    To demonstrate this point, let's take the following statement about the US economy:

    The stock market is the highest it's ever been. We have the greatest economy in the history of our country.

    If you heard this from Donald Trump (he has made the above statement multiple times), you would likely question it immediately. We are familiar with his rhetorical style: he often bends the truth or prioritises hyperbole over precision. Our scepticism is triggered by the source.

    However, if you asked a financial analyst, you would get a more nuanced response:

    While the market did hit record numbers (which happens naturally due to inflation), the rate of growth was not actually the 'greatest in history'. At the three-year mark, the market was up roughly 45% under Trump, compared to 53% under Obama and 57% under Clinton.

    When we remove the human source, we lose this critical context. By stripping away the "who", we put the accuracy of the "what" in jeopardy. AI takes insights that required years of research and lived experience, strips them of their author, and repackages them with its own bias for our instant consumption. I rarely see the likes of ChatGPT or Gemini offer true attribution to the human behind the data for our own vetting.

    I am all too aware of this from my own experience building a project with AI focused on the stock market and economy, where the data can be subjective and context-dependent. An example is trying to provide the reasoning behind changes in key indices and commodities: the reasoning behind a change in value often hides a dozen competing narratives. When I built my application, I realised that if the AI chooses one narrative over another without telling me why, or who championed it, it isn't just summarising the truth; it is effectively editing it.

    Now, I don't want this post to come across as negative towards AI; that would be pretty hypocritical after my glowing take on how I use the technology in my previous post. It has just made me more conscious that even though the knowledge AI presents doesn't necessarily lack meaning, it might lack soul. We get the answer, but we miss the human condition that made the answer necessary in the first place.

    We have to acknowledge that AI is an incredible tool for gathering information, but it should be the starting point, not the finish. Use it to broaden your search, but go to people to deepen your understanding.

  • The title of this post isn't just a great line from Inception; it's a directive. Eames telling Arthur to expand their constructed reality beyond mere imitation and take bigger risks has been replaying in the back of my mind lately. It felt like the only appropriate way to break the radio silence after such a long hiatus and offer a glimpse into my current mindset. While I haven't been navigating multiple levels of a subconscious dream state, this past year has been about breaking free from self-imposed limitations. I've been pushing beyond my day-to-day coding endeavours to invest time into the very thing dominating our headlines: Artificial Intelligence!

    It is a technology moving at such breakneck speed that you can't just dip a toe in; you have to dive in headfirst and swim, trusting that you'll emerge on the other side a wiser man. Failing to observe the shift in an industry like mine, in my view, is career suicide. With platforms and services releasing their own form of AI tools—some I deem more successful than others—I needed to find my own way in. As programmers, we can no longer afford the luxury of being so tunnel-visioned, clinging rigidly to our area of expertise while the landscape changes around us.

    The thought of getting any footing in the world of AI filled me with dread. This could be down to setting the bar of expectation too high. I knew I was never going to be the type of person to build some deep learning AI engine from scratch, as you really need the "street smarts" of an AI Engineer to do that. Instead, learning to use the AI tools and frameworks already readily available, such as machine learning frameworks and the APIs provided by ChatGPT and Gemini, would give me the step up I needed.

    The Journey To Discovery

    My journey began not with complex neural networks, but with the fundamentals of machine learning (via ML.NET). It was a learning curve, requiring me to rethink how I approached problem-solving. But as the concepts started to click, the potential for a specific use case suddenly became undeniable. I started small, experimenting with a simple concept that could be of tangible value: predicting the future pricing of used cars based on historical data and their individual attributes.
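    The car-pricing experiment boils down to fitting a line through historical data so a price can be predicted from an attribute. The sketch below is a toy Python illustration of that idea, not the actual ML.NET code; the mileage and price figures are invented.

```python
# Toy regression sketch: predict a used car's price from its mileage
# using ordinary least squares. Data is invented for illustration.

# Each car: (mileage in thousands of miles, price in pounds)
cars = [(10, 18000), (25, 15500), (30, 14000), (45, 11500),
        (60, 9500), (70, 8200), (90, 6500), (100, 5800)]

n = len(cars)
mean_x = sum(x for x, _ in cars) / n
mean_y = sum(y for _, y in cars) / n

# Least-squares slope and intercept for: price = a + b * mileage
b = sum((x - mean_x) * (y - mean_y) for x, y in cars) / \
    sum((x - mean_x) ** 2 for x, _ in cars)
a = mean_y - b * mean_x

def predict_price(mileage):
    """Estimate a price for a car with the given mileage (thousands)."""
    return a + b * mileage

print(round(predict_price(50)))  # estimate for a 50k-mile car
```

    In ML.NET the same shape of problem is expressed as a pipeline of feature transforms and a regression trainer, but the underlying idea of learning a mapping from attributes to price is the same.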

    Not long after this, I started working on my very own side-project in another area I am very passionate about: stocks and trading. I developed a website called Stockmantics that takes in the day's stock and trading news to produce a daily digest in a format that was beneficial to me. My own one-stop shop for the day's trading news, without having to read the many different newsletters I had relied on previously. I used AI as a way to serve my own needs that could also help others. It's a beast of a project that I am incredibly proud of, and I plan to do a write-up on it next year. But for now, suffice it to say that it taught me more about the practical pipelines of AI than any tutorial ever could.
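    At its heart, a digest like this starts with an aggregation step: collect headlines from many sources, drop duplicates, and group what remains. The snippet below is a toy Python sketch of just that step; the source names, headlines and tickers are invented, and the real Stockmantics pipeline additionally uses AI to summarise the grouped content.

```python
# Toy aggregation step for a daily digest: de-duplicate headlines
# pulled from several sources and group them by mentioned ticker.
# All names and headlines here are made up for illustration.

headlines = [
    ("NewsletterA", "AAPL beats earnings expectations"),
    ("NewsletterB", "AAPL beats earnings expectations"),  # duplicate
    ("NewsletterB", "TSLA announces new factory"),
    ("NewsletterC", "Fed holds rates steady"),
]

tickers = {"AAPL", "TSLA"}

def build_digest(items):
    """Return headlines grouped by ticker, with duplicates removed."""
    seen, digest = set(), {"general": []}
    for _, text in items:
        if text in seen:
            continue  # skip a headline we've already filed
        seen.add(text)
        matched = [t for t in tickers if t in text.split()]
        for t in matched:
            digest.setdefault(t, []).append(text)
        if not matched:
            digest["general"].append(text)
    return digest

print(build_digest(headlines))
```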

    One of the final AI projects I worked on at the tail end of the year was a proof-of-concept revolving around vision search. I wanted to see if I could build a system capable of scanning a client's database to find visually similar items based on nothing but an uploaded image, with the ability to detect what the image consisted of. Combining metadata attribution with the image search produced results whose accuracy surpassed my own expectations.
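    Conceptually, a vision search like this ranks catalogue items by how close their image embeddings are to the query image's embedding, with metadata narrowing the candidates. The sketch below is a minimal, hypothetical Python version: the 4-dimensional vectors stand in for the output of a real vision model, and the catalogue items are invented.

```python
import math

# Toy index: each item has a (precomputed) embedding plus metadata.
# Real embeddings would come from a vision model; these short vectors
# exist purely to demonstrate the similarity-search step.
catalogue = {
    "red-sofa":  ([0.9, 0.1, 0.3, 0.0], {"category": "furniture"}),
    "blue-sofa": ([0.6, 0.3, 0.5, 0.2], {"category": "furniture"}),
    "red-lamp":  ([0.2, 0.9, 0.1, 0.3], {"category": "lighting"}),
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def visually_similar(query_vec, category=None, top_k=2):
    """Rank items by similarity, optionally filtered by metadata."""
    candidates = [
        (name, cosine(query_vec, vec))
        for name, (vec, meta) in catalogue.items()
        if category is None or meta["category"] == category
    ]
    return sorted(candidates, key=lambda p: p[1], reverse=True)[:top_k]

query = [0.85, 0.15, 0.35, 0.05]  # embedding of the uploaded image
print(visually_similar(query, category="furniture"))
```

    The metadata filter is what makes the attribution pairing pay off: the similarity ranking only ever runs over items that already match what the image was detected to contain.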

    If Asimov had his Three Laws to govern the behaviour of robots, I had my three specific applications, each a critical stepping stone that shaped my understanding of where I could integrate AI and what future possibilities it holds. Rather than just being an end user, I was building something of my own creation. I was able to see AI from a different perspective, which gave me a newfound appreciation. It ended up being a really rewarding experience, far removed from what I am normally used to developing, and this is just the start.

    Final Thoughts

    I've come to view AI not as a competitor, or a full human replacement, but as a tireless, low-cost assistant ready to help take the smallest seed of an idea and grow it into a tangible reality, at a speed I never thought possible. It bridges the gap between theory and fruition, allowing me to truly dream a little bigger.