Blog

Blogging on programming and life in general.

  • In my previous post discussing my foray into the world of AI, I mentioned working on a personal project called "Stockmantics". But what exactly is Stockmantics, and why did I decide to build it?

    Stockmantics started because I needed a project where I could apply my AI knowledge to a real-world problem. In the end, I didn't have to look further than my own hobbies.

    Aside from coding, I’ve become heavily invested (pun intended) in the stock market. It all started shortly after COVID, when there was so much buzz online about people putting money into companies and index funds. Seeing the returns made by those who invested at the right time (during the lockdown of March 2020) opened my eyes to a potential new income stream. I didn't want to miss out on the fun, so I decided to learn the ropes of an area I knew nothing about. I just didn't expect it to turn into a full-time hobby.

    However, unlike most hobbies, I considered this one fraught with danger; one must err on the side of caution. After all, real money is at stake, and acting foolhardy or investing incorrectly can lead to significant losses.

    The Requirement

    When I became more confident in my investment strategy and the type of trader I wanted to be, I found one aspect consistently time-consuming: finding an easy-to-read daily digest in one place. I was tired of hopping from website to website or subscribing to endless newsletters just to get a clear picture.

    So, with the help of AI, I decided to build a tool that would do this for me, and Stockmantics was born. My requirements were as follows:

    • Market Snapshot: A quick look at key indices (S&P 500, FTSE 100, NASDAQ, Commodities, etc.).
    • Daily Summary: A single, concise sentence summarising what happened that day.
    • Global News: Key events from the USA, Europe, and Asia.
    • Crypto Updates: High-level developments in cryptocurrency, focusing on the majors.
    • Investor Action: A conclusion based on the day's news, suggesting what an investor should look out for.
    • Smart Glossary: Tooltipped definitions for stock market, investment, and economic terms to assist novice investors (and provide a constant refresher for myself).
    • Social-Media Integration: Automatic posting to X, highlighting key stories from the day's article.

    My philosophy for this personal project is simple: if it assists my own needs, that is a big win in itself. If someone else finds my method of digesting the day's financial news useful, that will be the icing on the cake. I decided early on that the success of Stockmantics would not be measured by visitor numbers or X followers, but by what I learnt during the development process and whether it truly works for me.

    Application Architecture

    The application architecture is based on the following Microsoft technologies:

    ASP.NET Core Razor Pages

The website is a relatively small and simple application that consists of the following pages:

    1. Homepage
    2. Article Listing
    3. Article
    4. Generic Content (for About/Terms/Disclaimer pages)

    A CMS wasn't needed as all content and data would be served from Azure Storage Tables. All there is from a content-management perspective is an authenticated "Article Management" area, where content generated by Gemini could be overridden when required.

    Azure Storage Tables

I made a deliberate decision to use Azure Storage Tables over a SQL database to store all of the Stockmantics data, as there is no relational element between the tables. It also provided a lower-cost alternative and a quicker route to development, as sketched after the list of tables below.

    List of tables:

    • Article
    • MarketSnapshot
    • SocialShare
    • StockmarketGlossary
    • AppSetting
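For anyone unfamiliar with the Azure.Data.Tables SDK, the snippet below is a minimal sketch of how an article entity might be written and read back. The schema shown (Title, Body, PublishDate) and the month-based partitioning are illustrative assumptions rather than the actual Stockmantics entities.

using System;
using Azure.Data.Tables;

// Minimal sketch - the connection string source and schema are placeholders.
string? connectionString = Environment.GetEnvironmentVariable("STORAGE_CONNECTION_STRING");
var tableClient = new TableClient(connectionString, "Article");

// Store a generated article, partitioned by month so date-range queries stay cheap.
var article = new TableEntity(partitionKey: "2025-01", rowKey: Guid.NewGuid().ToString())
{
    ["Title"] = "Markets edge higher as tech rallies",
    ["Body"] = "<p>Article body generated by Gemini...</p>",
    ["PublishDate"] = DateTime.UtcNow
};
await tableClient.AddEntityAsync(article);

// Read back every article stored for that month.
var articles = tableClient.Query<TableEntity>(e => e.PartitionKey == "2025-01");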

    Azure Blob

    For images that may be used in article content.

    Azure Functions

All the grunt work of getting the data is done by timer-triggered Azure Functions that fire shortly after the US markets open (around midday GMT) in order to capture the most up-to-date goings-on in the market.

A breakdown of the Azure Functions is as follows, with a rough sketch of one of the timer triggers after the list:

• Generate News Article - queries stock market APIs and news feeds, sending the results to the Gemini API to construct an article tailored to my requirements. The article is then stored in the Article table with related attributes and additional metadata suited to being served in a webpage.
• Generate Social Posts - extracts 10 key facts from the generated news article to be transformed into tweets. The day's generated tweets are stored until pushed to social media platforms.
• Market Snapshot - uses the Yahoo Finance API to return the market price and percentage change for the core market indices. These values are then passed to the Gemini API's "Grounding with Google Search" feature to provide sentiment and the reasons behind the change in price.
    • Post To X - publishes a tweet every 15 minutes.
    • Post To Bluesky - publishes a post every 15 minutes.
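To give an idea of the shape these functions take, here is a minimal sketch of a timer trigger using the isolated worker model. The function name, the NCRONTAB schedule and the commented steps are assumptions made for this example rather than the production code.

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class GenerateNewsArticle
{
    private readonly ILogger<GenerateNewsArticle> _logger;

    public GenerateNewsArticle(ILogger<GenerateNewsArticle> logger) => _logger = logger;

    // NCRONTAB format (second minute hour day month day-of-week): midday, Monday to Friday.
    [Function("GenerateNewsArticle")]
    public async Task Run([TimerTrigger("0 0 12 * * 1-5")] TimerInfo timer)
    {
        _logger.LogInformation("Building today's digest at {Time}", DateTime.UtcNow);

        // 1. Pull raw prices and headlines from the external market and news feeds.
        // 2. Send them to the Gemini API along with the tailored system instructions.
        // 3. Store the generated article and its metadata in the Article table.
        await Task.CompletedTask;
    }
}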

    The Chosen AI Engine

    It was always going to be a choice between Google Gemini and OpenAI. I was already familiar with both LLMs (Large Language Models), having casually thrown stock market queries at them—among other things—long before this project was even a glint in my eye. Ultimately, my decision hinged on two key factors:

    1. API: The ease of use and the reliability of the endpoints in returning structured data.
    2. Cost Factor: Being unfamiliar with the specific pricing structures of LLMs, I needed to estimate the cost per API call and project my monthly expenditure based on token usage. The OpenAI GPT API Pricing Calculator provided an excellent breakdown of costs across all major AI providers.

    I concluded that Google Gemini was the best fit for Stockmantics, primarily because the model I intended to use (gemini-2.5-flash) offered the most competitive pricing. The cost for one million input and output tokens works out to approximately $0.37, compared to OpenAI's $2.00.

    Furthermore, I felt that Gemini held a slight edge over OpenAI. They might have been late to the AI party, but they have certainly made up for lost time with impressive speed. It also had a card up its sleeve that I only discovered during development: Grounding with Google Search. This feature allows the model to access real-time information from the web, ensuring that the data returned is current rather than limited to a training cut-off date.

    Misjudging the Machine: Data is King!

I was initially under the impression that I could simply ask the likes of OpenAI or Gemini to collate the day's stock market news, which I could then format to my liking. However, this proved to be a mistake. When dealing with fast-moving financial news, I found the results hit-and-miss. The models would frequently return information that was out of date or cite entirely incorrect market prices (even when using Grounding with Google Search).

    At this point, I realised I needed to take a step back and reassess my approach. It became clear that without a reliable, accurate data feed, this application would be of no use to man nor beast.

The solution had to start with raw data, which the LLM could then use as its base to expand upon. For this, I found pulling financial data from the likes of Yahoo Finance feeds, amongst other finance-related news feeds, to be invaluable. The general approach is sketched below.
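To give a flavour of the approach: gather the verified figures first, then hand them to the model purely as writing material. The request below follows the shape of the public Gemini generateContent REST endpoint, but the prompt wording, model choice and the way Stockmantics actually wires this together are simplified assumptions for illustration.

using System;
using System.Net.Http;
using System.Net.Http.Json;

// Verified figures pulled from the finance feeds beforehand (illustrative values only).
string marketData = "S&P 500: 5,980 (+0.4%) | FTSE 100: 8,250 (-0.1%) | Brent Crude: $73.10 (+1.2%)";

var request = new
{
    contents = new[]
    {
        new
        {
            parts = new[]
            {
                new { text = "Using only the figures below, write a concise daily market summary.\n" + marketData }
            }
        }
    }
};

using var http = new HttpClient();
http.DefaultRequestHeaders.Add("x-goog-api-key", Environment.GetEnvironmentVariable("GEMINI_API_KEY"));

// The model writes around the supplied data rather than recalling prices itself.
var response = await http.PostAsJsonAsync(
    "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent",
    request);

string rawJson = await response.Content.ReadAsStringAsync();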

    Lengthy Vetting Period

The transition from a proof-of-concept to the final version of Stockmantics required a lengthy vetting period, which continued for weeks after going live. The raw output from the LLM was rarely perfect on the first try, leading to many iterations of refinement. My focus was on four key areas:

    • Structure & Flow: Tweaking the system instructions to ensure the output was digestible, preventing the model from generating dense, unreadable paragraphs.
    • Sector Balance: Ensuring the article provided a holistic view of the market, rather than fixating solely on volatile tech stocks or the "Magnificent Seven".
    • Glossary Precision: Fine-tuning the tooltips to provide definitions that were accessible to novices without losing technical accuracy.
• Geopolitical Neutrality: Ensuring that reports on world affairs, which often drive market sentiment, were delivered with an objective and balanced tone.

    What I learnt from this process is that while anyone can write a basic AI prompt, getting the granular nuances right takes a significant amount of time. It is less about coding and more about the art of communication; you have to learn how to speak the model's language to get the consistent, high-quality output you need. Even now, I find myself still making ongoing tweaks for further improvement.

If you compare the very first article published against one of the more recent ones, I am hoping a vast difference will be noticeable.

    Breakdown of Costs

One of my main priorities was to keep the running costs of this project tight, and I think things ended up being quite good value. Here is a monthly breakdown:

    1. Website and Domain: £6.25
    2. Azure Services (Functions/Blob Storage/Tables): £1.10
    3. Google Gemini API: £4.00

So we're looking at around £11.35 in total monthly costs. Not bad. Google Gemini costs will be the only item I expect to fluctuate, based on the varied number of tokens utilised for each daily article.

NOTE: Google Gemini and Azure services are only used on weekdays, when the stock markets are open, so the costs are based on a five-day week.

    Conclusion

    I am unsure what the long-term future holds for Stockmantics. Its lifespan ultimately depends on ongoing costs, maintenance effort, and whether I continue to find it useful for my own needs. However, for now, it serves a valuable purpose beyond just financial news: I have a robust, live application that acts as the perfect test bed for experimenting with new AI features and expanding my technical skillset.

Fortunately, thanks to various architectural decisions and efficiency improvements, the running costs are currently sustainable, and the site itself is very low maintenance—touch wood! I foresee that further development will only be required if the external APIs change. I have already paid for a year's worth of web hosting until October 2026 and will reassess things closer to that date.

    If you got this far, thank you for taking the time to read through the development process. If you are interested in seeing the final result, you can find all the links to Stockmantics below:

  • Learning from Algorithms Instead of People

What happens when we remove the human from a fact or a piece of information? Does it change how we perceive it? This thought came to mind when I was questioning whether community-based sites, such as Stack Overflow, are still relevant and made an open-ended remark that Generative AI has now starved us of knowing the person behind the knowledge.

    Historically, we accept knowledge through some form of testimony. We will only believe in something based on what a person has told us. We evaluate their character, their knowledge and most importantly, their honesty. With AI, there is no "person" to trust. You cannot evaluate the AI's moral character or life experience because it has none.

    To demonstrate this point, let's take the following statement about the US economy:

    The stock market is the highest it's ever been. We have the greatest economy in the history of our country.

    If you heard this from Donald Trump (the above statement has been said multiple times by him), you would likely question it immediately. We are familiar with his rhetorical style in how he often bends the truth or prioritises hyperbole over precision. Our scepticism is triggered by the source.

    However, if you asked a financial analyst, you would get a more nuanced response:

    While the market did hit record numbers (which happens naturally due to inflation), the rate of growth was not actually the 'greatest in history'. At the three-year mark, the market was up roughly 45% under Trump, compared to 53% under Obama and 57% under Clinton.

    When we remove the human source, we lose this critical context. By stripping away the "who", we put the accuracy of the "what" in jeopardy. AI operates by taking the insights that required years of research and lived experience, strips them of their author, and repackages them only to regurgitate them with its own bias for our instant consumption. I rarely see the likes of ChatGPT or Gemini offer true attribution to the human behind the data for our own vetting.

I am all too aware of this from my own experience of building an AI-focused project around the stock market and economy, where the data can be subjective and context-dependent. An example of this is when trying to provide the reasoning behind changes in key indices and commodities. The reasoning behind a change in value often hides a dozen competing narratives. When I built my application, I realised that if the AI chooses one narrative over another without telling me why or who championed it, it isn't just summarising the truth; it is effectively editing it.

Now, I don't want this post to come across as negative towards AI, as that would be pretty hypocritical after my glowing take on how I use the technology in my previous post. It has just made me more conscious that even though the knowledge it presents doesn't necessarily lack meaning, it might lack soul. We get the answer, but we miss the human condition that made the answer necessary in the first place.

    We have to acknowledge that AI is an incredible tool for gathering information, but it should be the starting point, not the finish. Use it to broaden your search, but go to people to deepen your understanding.

  • The title of this post isn't just a great line from Inception; it's a directive. Eames telling Arthur to expand their constructed reality beyond mere imitation and take bigger risks has been replaying in the back of my mind lately. It felt like the only appropriate way to break the radio silence after such a long hiatus and offer a glimpse into my current mindset. While I haven't been navigating multiple levels of a subconscious dream state, this past year has been about breaking free from self-imposed limitations. I've been pushing beyond my day-to-day coding endeavors to invest time into the very thing dominating our headlines: Artificial Intelligence!

    It is a technology moving at such breakneck speed that you can't just dip a toe in; you have to dive in headfirst and swim, trusting that you'll emerge on the other side a wiser man. Failing to observe the shift in an industry like mine, in my view, is career suicide. With platforms and services releasing their own form of AI tools—some I deem more successful than others—I needed to find my own way in. As programmers, we can no longer afford the luxury of being so tunnel-visioned, clinging rigidly to our area of expertise while the landscape changes around us.

The thought of getting any footing into the world of AI filled me with dread. This could be down to setting the bar of expectation too high. I knew I was never going to be the type of person to build some deep learning AI engine from scratch, as you really need the "street smarts" of an AI Engineer to do that. Instead, learning to use the AI tools and frameworks already readily available, such as machine-learning frameworks and the APIs provided by ChatGPT and Gemini, would give me the step up I needed.

    The Journey To Discovery

    My journey began not with complex neural networks, but with the fundamentals of machine learning (via ML.NET). It was a learning curve, requiring me to rethink how I approached problem-solving. But as the concepts started to click, the potential for a specific use case suddenly became undeniable. I started small, experimenting with a simple concept that could be of tangible value, where I could predict future pricing of used cars based on historical data and their individual attributes.
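For anyone curious what that first experiment roughly looked like, below is a stripped-down ML.NET regression sketch. The column names, CSV layout and choice of trainer are placeholders to show the shape of the pipeline, not the exact model I built.

using System;
using Microsoft.ML;
using Microsoft.ML.Data;

// Placeholder schema - the real dataset had more attributes per car.
public class CarListing
{
    [LoadColumn(0)] public string Make { get; set; }
    [LoadColumn(1)] public float Year { get; set; }
    [LoadColumn(2)] public float Mileage { get; set; }
    [LoadColumn(3)] public float Price { get; set; }
}

public class PricePrediction
{
    [ColumnName("Score")] public float Price { get; set; }
}

public static class CarPriceModel
{
    public static void TrainAndPredict()
    {
        var mlContext = new MLContext();

        // Historical listings with a header row: Make,Year,Mileage,Price
        var data = mlContext.Data.LoadFromTextFile<CarListing>("used-cars.csv", hasHeader: true, separatorChar: ',');

        // Encode the categorical column, combine the features, then train a simple regression.
        var pipeline = mlContext.Transforms.Categorical.OneHotEncoding("MakeEncoded", "Make")
            .Append(mlContext.Transforms.Concatenate("Features", "MakeEncoded", "Year", "Mileage"))
            .Append(mlContext.Regression.Trainers.Sdca(labelColumnName: "Price"));

        var model = pipeline.Fit(data);

        // Predict the price of a single unseen car.
        var engine = mlContext.Model.CreatePredictionEngine<CarListing, PricePrediction>(model);
        var prediction = engine.Predict(new CarListing { Make = "Ford", Year = 2018, Mileage = 42000 });
        Console.WriteLine($"Predicted price: {prediction.Price:C0}");
    }
}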

Not too far along from this, I started working on my very own side-project in another area I am very passionate about: stocks and trading. I developed a website called Stockmantics that would take in the day's stock and trading news to produce a daily digest in a format that was beneficial to me: my own one-stop shop for the day's trading news, without having to read many different newsletters as I had done previously. I used AI as a way to assist with my own needs in a manner that could also help others. It's a beast of a project that I am incredibly proud of, and I plan to do a write-up on it next year. But for now, suffice it to say that it taught me more about the practical pipelines of AI than any tutorial ever could.

One of the final AI projects I worked on at the tail end of the year was a proof-of-concept that revolved around vision search. I wanted to see if I could build a system capable of scanning a client's database to find visually similar items based on nothing but an uploaded image, with the ability to detect what the image consisted of. The addition of metadata attribution working alongside the image search produced accurate results that surpassed my own expectations.

If Asimov had his Three Laws to govern the behaviour of robots, I had my three specific applications, each a critical stepping stone that shaped my understanding of where I could integrate AI and what the future possibilities might be. Rather than just being the end user, I was building something of my own creation. I was able to see AI from a different perspective, which resulted in a newfound appreciation. It ended up being a really rewarding experience, far from what I am normally used to developing, and this is just the start.

    Final Thoughts

    I've come to view AI not as a competitor, or a full human replacement, but as a tireless, low-cost assistant ready to help take the smallest seed of an idea and grow it into a tangible reality, at a speed I never thought possible. It bridges the gap between theory and fruition, allowing me to truly dream a little bigger.

I've been using the gatsby-plugin-smoothscroll plugin in the majority of my GatsbyJS builds to provide a nice smooth-scrolling effect to an HTML element on a page. Unfortunately, it lacked the capability to offset the scroll-to position, which is useful when a site has a fixed header or navigation.

    I decided to take the gatsby-plugin-smoothscroll plugin and simplify it so that it would not require a dependency on polyfilled smooth scrolling as this is native to most modern browsers. The plugin just contains a helper function that can be added to any onClick event with or without an offset parameter.

    Usage

    The plugin contains a smoothScrollTo helper function that can be imported onto the page:

    // This could be in your `pages/index.js` file.
    
    import smoothScrollTo from "gatsby-plugin-smoothscroll-offset";
    

    The smoothScrollTo function can then be used within an onClick event handler:

    <!-- Without offset -->
    <button onClick={() => smoothScrollTo("#some-id")}>My link without offset</button>
    
    <!-- With offset of 80px -->
    <button onClick={() => smoothScrollTo("#some-id", 80)}>My link with offset</button>
    

    Demo

    A demonstration of the plugin in use can be found by navigating to my Blog Archive page and clicking on any of the category links.

    Prior to this plugin, the category list header would be covered by the sticky navigation.

    Smooth Scrolling without Offset

    Now that an offset of 80px can be set, the category list header is now visible.

    Smooth Scrolling with Offset

    Links

  • I woke up yesterday morning to a serendipitous discovery that all my stock positions had successfully been transferred from Freetrade to Trading 212. There really is nothing more rewarding than seeing all investments under one stockbroker with a nice five-figure number staring back at you.

    Since I started investing in stocks at the start of 2022, the only stock broker app that was available to me was Freetrade and it made my introduction to making investments into hand-picked stocks very straight-forward. But as my portfolio grew, so did my requirements and when Trading 212 opened its doors to new sign-ups (after being on a very long waiting list), I decided to see if the grass was truly greener on the other side... and it was.

    Trading 212 had what Freetrade didn't:

    • An active community of like-minded users commenting on their views and insights against each stock.
    • 5.2% (as of today 5.17%) interest on held cash.
    • Introduction of a Cash ISA.
    • Ability to view stock graphs in detailed view with the ability to annotate specific trendlines.
• Free use of a Stocks and Shares ISA.
    • Lower FX rates.
    • Fractional shares on ETFs.

    Unfortunately for Freetrade, I just couldn't see a future where they could provide the features I needed in addition to free use of the service. I was being charged £60 per year for the privilege of a Stocks and Shares ISA - free on Trading 212.

Even though I explored Trading 212 when it became available last year, I made a decision to only start investing at the start of the 2024 tax year to avoid any ISA-related tax implications from utilising two Stocks and Shares (S&S) ISAs. This is now a moot point, as you are able to invest in two different S&S ISAs as long as you do not exceed the yearly £20k limit.

    Planning The Move

I am currently seven months into using Trading 212 for investing, but it was only in October that I felt I was in a position to transfer all my stock holdings from Freetrade. Why such a long wait?

The wait was primarily due to not really understanding the correct route to transferring my portfolio without eating into my current year's tax-free allocation, whilst retaining the average stock price per holding. I also had concerns over the large sum of money being transferred; it's not something that should be taken lightly.

    I am hoping this post will provide some clarity through my experience in transferring my portfolio to Trading 212, even if it is tailored more towards what I experienced in moving away from Freetrade.

    In-Specie Transfer

    In-specie wasn't a term I was familiar with prior to researching how I could move my stock portfolio to another platform.

    'In specie' is a Latin term meaning 'in the actual form'. Transferring an asset 'in specie' means to transfer the ownership of that asset from one person/company/entity to another person/company/entity in its current form, that is without the need to convert the asset to cash.

Before in-specie transfers, the only way to move from one stock broker to another was to sell all your holdings for cash and then reinvest within the new brokerage. The main disadvantages of doing this are:

    • Time out of the market creating more risk to price fluctuations.
    • Potential loss due to the difference between the sell and buy prices.
    • Additional brokerage fees when repurchasing the same assets with a new provider.
    • Loss of tax efficiency if you have a large portfolio that might wipe out or exceed the yearly tax-free allocation.
    • Missed dividend payouts.
    • Taking losses on selling stocks that haven't made a profit.

I've noticed over the last couple of years that in-specie transfers have become more widely supported amongst the smaller stock brokers (the ones you and I are more likely to use), such as Freetrade, Trading 212 and InvestEngine, which makes moving from one platform to another a much simpler process.

Even though the process has become simpler, it is still time-consuming, as transfer completion can take anywhere between 4-6 weeks depending on the coordination between the two stock platforms.

    My In-Specie Transfer Timeline

My own in-specie transfer took a little longer than I hoped - around six weeks, with the key milestones dated below.

    12/10/24

Initiated the transfer process in Trading 212 by selecting the stocks I wanted to transfer. You can select specific stocks or your whole portfolio. I based my transfer on selecting all my holdings and specifying the average stock price, as I wanted to retain my position.

    23/10/24

Freetrade emailed to confirm a transfer request had been received and asked me to get my portfolio in order so the process could move smoothly, which entailed:

• Paying a £17 fee for each US holding in my account.
• Rounding up any fractional shares, as shares in their fractional state cannot be transferred. For one of my stock holdings, I decided to purchase slightly more and round up the total value rather than sell down, as this stock in particular is in the negative.

    12/11/24

Three weeks had passed and I hadn't heard anything from either party. I contacted Trading 212 support to report the delay in the transfer and ask if any reason could be provided. I didn't get a reply, but the next day things started ticking along. Maybe this gave them the 'kick' they needed?

    13/11/24

Trading 212 completed arrangements with Freetrade and was now in a position to start the actual transfer, which would take place over the course of a two-week period.

    21/11/24

    I woke up to find all stocks had been transferred whilst maintaining my average stock price. There is still one minor job awaiting completion: transfer of a small amount of cash. The most important job had been done and I could now rest easy.

    Next steps

Once the small amount of cash has been transferred, I plan on cancelling my yearly Freetrade Standard plan expiring in June 2025. By the time the transfer has been completed, I will have six months left on my subscription that I can get refunded (minus a £5 admin fee).

  • When developing custom forms in Umbraco using ASP.NET Core’s Tag Helpers and DataAnnotations, I noticed that display names and validation messages weren’t being rendered for any of the fields.

    [Required(ErrorMessage = "The 'First Name' field is required.")]
    [Display(Name = "First Name")]
    public string? FirstName { get; set; }
    
    [Required(ErrorMessage = "The 'Last Name' field is required.")]
    [Display(Name = "Last Name")]
    public string? LastName { get; set; }
    
    [Required(ErrorMessage = "The 'Email Address' field is required.")]
    [Display(Name = "Email Address")]
    public string? EmailAddress { get; set; }
    
    

    This was quite an odd issue that (if I'm honest!) took me quite some time to resolve as I followed my usual approach to building forms — an approach I’ve used many times in Umbraco without any issues. The only difference in this instance was that I was using an Umbraco form wrapper.

    @using (Html.BeginUmbracoForm<ContactFormController>("Submit"))
    {
        <fieldset>
            <!-- Form fields here -->
        </fieldset>
    }
    

I must have been living under a rock, as I have never come across this in all my years working with Umbraco. It could be down to the fact that the forms I have developed in the past didn't rely so heavily on .NET's DataAnnotation attributes.

The only solution available to remedy this problem was to install a NuGet package (currently in beta), kindly created by Dryfort.com, which resolves the display name and validation attributes for in-form rendering.

The NuGet package works in Umbraco 10 onwards. I've personally used it in version 13 without any problems. Until there is an official Umbraco fix, this does the job nicely and comes highly recommended if you encounter similar issues.

  • As someone who specializes in integrations, I’m hardly ever surprised when I come across yet another CRM platform I’ve never heard of. It feels like there are almost as many CRMs out there as stars in the night sky — okay, maybe that's a bit of an exaggeration, but you get the idea.

    I was introduced to another platform while working on a small integration project: Nexudus. Nexudus is a comprehensive system designed specifically for managing coworking spaces, shared workspaces and flexible offices, whilst incorporating the features you’d expect from a customer relationship management platform.

    For one part of this integration, newsletter subscribers needed to be stored in Nexudus through a statically-generated site built on Astro, hosted in Netlify. The only way to pass subscriber data to Nexudus is through their API platform, which posed an opportunity to build this integration using Netlify serverless functions.

The Newsletter Subscriber API documentation provides a good starting point for sending through subscriber details and assigning them to specific newsletter groups. However, one issue arose during integration whereby the endpoint would error if a user was already subscribed within Nexudus, even if the subscription was for a different group.

It would seem that dealing with existing subscribers in Nexudus requires a separate update process, as using the Add Newsletter API endpoint alone does not take into consideration changes to subscription groups. It would be more straightforward if the Mailchimp API approach were taken, whereby the same user email address can be assigned to multiple mailing lists through a single API endpoint.

When developing the Netlify serverless function, I put in additional steps that allow existing subscribers to be added to new subscription groups through the following process:

    1. Look up the subscriber by email address.
    2. If a subscriber is not found, a new record is created.
    3. If a subscriber is found, update the existing record by passing through any changed values by the record ID.
4. For an updated record, the new group ID needs to be sent along with the group IDs the user is already assigned to.

A GitHub repository containing the aforementioned functionality can be found here: nexudus-netlify-functions. I may add other Nexudus API endpoints I have been working on to this repo going forward.

In a world filled with technological innovation that fulfils almost our every need, things can sometimes end up feeling all too sterile, especially around the creative-led tasks that should evoke something more visceral.

    It’s only a matter of time before many of us start to feel a void from relying on multifunctional devices that have become deeply intertwined with every part of our lives. Loosening even a small thread of this technological dependence can bring a profound sense of focus.

    One aspect I felt I had to change was my approach to writing as I started getting the feeling that the process was becoming all too sterile and monotonous. I had the urge to go back to a more tactile method of publishing content by starting the process with good old-fashioned pen and paper.

    One thing that became noticeably apparent when returning to this method of curating content is that the real world is far less forgiving, requiring the brain to relearn how to organise thoughts for long-form writing. In the early stages of drafting blog posts by hand, my pages were cluttered with crossed-out sentences and scribbled words. It became evident that I was really reliant on the forgiving nature of writing apps where blocks of text could easily be moved around.

However, with each blog post I wrote by hand, my brain has managed to think further ahead, where it previously lacked forethought and I regularly experienced writer's block. The posts I've published throughout September have all been curated by initially compiling a basic outline, which is then expanded upon into a longer form on paper first. This is probably how I managed to increase my output during the month. I can only attribute this to the lack of visual distractions creating a more kinesthetic environment for thoughts to gestate.

My approach to writing has changed over the years I have been blogging, and I am reminded of how I used to assimilate ideas from a post I wrote back in 2015: Pen + Paper = Productivity. It is here that I said something profound that has since been lost on me:

    Paper has no fixed structure that you are forced to conform to, which makes processing your own thoughts very easy. Unfortunately, software for note-taking has not advanced nearly as fast. It's still all too linear and fixed.

It's been nine years since that post was written, and while technology has advanced to the point of offering the convenience of writing on tablets (which I've done for a while using my own Apple iPad and Apple Pencil), it simply doesn't compare, no matter how much we try to mimic the experience with "paperlike" screen protectors.

    Even though technology helps us accomplish things faster, it comes at the cost of not being in the moment. Sometimes, the journey is more meaningful than the destination, and we don’t always need to rely on technology simply because it’s there.

    Does going back to basics make the publishing process longer? Surprisingly, not as much as you’d think. I was pleasantly surprised to discover that after everything is written down on paper, the final steps are mostly mechanical — typing it up on my laptop, running a spell and grammar check, adding an image, and finally hitting the publish button.

When handwriting long-form content, the process needs to be as easy and frictionless as possible, which means investing in a good-quality writing instrument. To quote Emmert Wolf: an artist is only as good as his tools. Using a better pen has encouraged me to write more, especially compared to the fatigue I felt with a BIC Cristal, which I find more suited to casual note-taking.

    Conclusion

Who knows, maybe this new approach will even improve the overall legibility of my handwriting — it really has deteriorated since I left school. Most likely the result of many years of programming. I don't think I will stop relying on my wife to write birthday and greeting cards anytime soon.

    I’d forgotten just how satisfying the experience of handwriting blog posts can be. It’s a bit like channelling the spirit of Bob Ross, layering words like brushstrokes that gradually form paragraphs into passages. When you're done, you can sit back and admire the canvas of carefully crafted marks you’ve created.

At times there is a need to get a list of files that have been updated. This could be for any of the following reasons:

    • Audit compliance to maintain records of application changes.
    • Backup verification to confirm the right files were backed up.
    • Verification of changed files to confirm which files were added, modified, or deleted during an update.
    • Security checks to ensure that there have been no unauthorised or suspicious files changed or installed through hacking.
• Troubleshooting issues after a new application release, where seeing a list of changed files can help identify the source of problems.

    Based on the information I found online, I put together a PowerShell script that was flexible enough to meet the needs of the above scenarios, as I encountered one of them this week. I'll let you guess the scenario I faced.

At its core, the following PowerShell script uses the Get-ChildItem command to list all files recursively across all sub-folders, ordered by created date descending, with the addition of a handful of optional parameters.

    Get-ChildItem -Path C:\My-Path -Recurse -Include *.png | 
    			Select -Last 5 CreationTime,LastWriteTime,FullName | 
    			Sort-Object -Property CreationTime -Descending | 
    			Export-Csv "file-list.csv"
    

    Breakdown of the parameters used:

• -Path - The folder path where files need to be listed. (Required)
• -Recurse - Get files from the path and its subdirectories. (Optional)
• -Include - Filter the file output through a path element or pattern. This only works when the "Recurse" parameter is present. (Optional)
• Select - Set the maximum output (-Last) and the list of fields to be listed. (Optional)
• Sort-Object - Specify the field and sort order. (Optional)
• Export-Csv - Export the list of files to a CSV. (Optional)

    If the files need to be sorted by last modified date, the Sort-Object property needs to be set to "LastWriteTime".

    When the script is run, you'll see the results rendered in the following way:

    CreationTime        LastWriteTime       FullName
    ------------        -------------       --------
    25/05/2023 20:33:44 25/05/2023 20:33:44 X:\Downloads\synology\Screenshot 2023-05-25 at 20.33.38.png
    16/05/2023 14:18:21 16/05/2023 14:18:21 X:\Downloads\synology\Screenshot 2023-05-16 at 14.18.15.png
    

    Further Information

  • I've been working with custom functionality for registering and authenticating external site users in Umbraco 13 using its Members feature.

A custom Member Type was created so I could add field properties to specifically store all member registration data. This consisted of Textbox, Textarea and Dropdown fields.

Getting field values in code is very straightforward, but I encountered issues when dealing with fields that consist of preset values, such as a Dropdown list of titles (Mr/Mrs/Ms/etc.).

    Based on the Umbraco documentation for working with a Dropdown field, I should be able to get the selected value through this one line of code:

    @if (Model.HasValue("title"))
    {
        <p>@(Model.Value<string>("title"))</p>
    }
    

When working with custom properties from a Member Type, the approach is different. GetValue() is the only accessor available to us to output a value - something we are already accustomed to when working in Umbraco.

    IMember? member = memberService.GetByEmail("johndoe@gmail.com");
    string title = member.Properties["title"].GetValue()?.ToString(); // Output: "[\"Mr\"]"
    

    However, the value is returned as a serialized array. This is also the case when using the typed GetValue() accessor on the property:

    IMember? member = memberService.GetByEmail("johndoe@gmail.com");
    string title = member.GetValue<string>("title"); // Output: "[\"Mr\"]"
    

    Umbraco 13 - Dropdown Value From Custom Member Type Property

    The only way to get around this was to create a custom extension method to deserialize the string array so the value alone could be output:

// Requires: Newtonsoft.Json for deserialisation and the Umbraco IProperty model.
using System.Linq;
using Newtonsoft.Json;
using Umbraco.Cms.Core.Models;

public static class MemberPropertyExtensions
{
    /// <summary>
    /// Gets the selected value of a Dropdown property.
    /// </summary>
    /// <param name="property"></param>
    /// <returns></returns>
    public static string? GetSelectedDropdownValue(this IProperty property)
    {
        if (property == null)
            return string.Empty;

        string? value = property?.GetValue()?.ToString();

        if (string.IsNullOrEmpty(value))
            return string.Empty;

        string[]? propertyArray = JsonConvert.DeserializeObject<string[]>(value);

        return propertyArray?.FirstOrDefault();
    }
}
    

    It's a simple but effective solution. Now our original code can be updated by adding our newly created GetSelectedDropdownValue() method to the property:

    IMember? member = memberService.GetByEmail("johndoe@gmail.com");
    string title = member.Properties["title"].GetSelectedDropdownValue();
    

    Useful Information