Good afternoon, London. It's partly cloudy today, but let's not dwell on the weather. Instead, let's turn our focus to something far more pressing: the landscape of artificial intelligence. Alan Turing once noted, "The majority of mathematical arguments even when they are correct are not rigorous in the sense in which this is meant today." In a world overwhelmed with artificial intelligence buzzwords and flashy headlines, Turing's insight serves as a reminder of our mission here: to cut through the noise and bring clarity to this complex field. Tired of drowning in artificial intelligence headlines? Ready for clarity, insight, and a direct line to the pulse of innovation? Welcome to Turing's Torch: AI Weekly! I'm Jonathan Harris, your host, and I'm cutting through the noise to bring you the most critical artificial intelligence developments, explained, analysed, and delivered straight to you. Let's ignite your understanding of artificial intelligence, together.

Right then, let's have a look at what's been happening in the world of artificial intelligence. We're seeing some interesting shifts in the geography of artificial intelligence, for a start. The Association for the Advancement of Artificial Intelligence, or AAAI – a rather grand name for a conference – is holding its annual shindig in Singapore in 2026. Now, this might not sound like headline news, but it's the first time in forty years that this particular gathering of academics, researchers, and industry types has ventured outside North America. Think of it as Davos, but with algorithms instead of politicians. Singapore, of course, is a tech hub, throwing money at artificial intelligence like confetti. So, in one sense, it's not a shock they've been chosen. But what does it really mean? Well, it's a signal that the artificial intelligence game is increasingly global. It's a recognition that innovation isn't just happening in the West. It gives researchers and businesses in Asia a chance to show off their work and build international collaborations. It could lead to new ways of thinking about artificial intelligence, driven by different cultural contexts and priorities. But let's not get carried away. This is also about geopolitics. Nations are vying for leadership in artificial intelligence, and hosting a big conference is a way to flex your muscles and attract talent. We've seen this before in other areas of technology, with governments offering incentives to boost their domestic artificial intelligence industries. The question, though, is whether moving a conference actually fosters diversity of thought, or just extends existing power structures to new territories. Will it lead to genuine collaboration, or just a load of Western academics descending on Singapore with their pre-packaged solutions? One suspects the answer might be a bit of both. It's a start, at least, but let's not pretend it's a complete levelling of the playing field.

That raises a broader question: who really benefits from all this artificial intelligence development? We hear a lot about efficiency and productivity, but what about the potential downsides? We've seen how artificial intelligence is now being touted as a potential solution to protecting Europe's underwater pipelines and cables. These networks, vital for energy and data transmission, are vulnerable to sabotage, as the Nord Stream pipeline explosions demonstrated rather dramatically.
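We'll unpack the idea in a moment, but at its core this is anomaly detection over streams of sensor readings: learn what normal looks like, and flag what doesn't. For the technically curious, here's a toy sketch of that general pattern in Python – every sensor name, window size, and threshold below is invented purely for illustration, and any real monitoring system would be vastly more elaborate:

```python
# Illustrative sketch only: a toy "learn normal, flag deviations" detector.
# The sensor name, window size, and threshold are invented for this example.
from collections import deque
from statistics import mean, stdev


class PressureAnomalyDetector:
    """Flags pipeline pressure readings that deviate sharply from recent history."""

    def __init__(self, window: int = 60, threshold: float = 4.0):
        self.readings = deque(maxlen=window)  # rolling window of recent readings
        self.threshold = threshold            # how many standard deviations counts as "odd"

    def observe(self, value: float) -> bool:
        """Record a reading and return True if it should be flagged for human review."""
        flagged = False
        if len(self.readings) >= 10:  # need some history before judging
            mu, sigma = mean(self.readings), stdev(self.readings)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                flagged = True
        self.readings.append(value)
        return flagged


# Steady readings around 80 bar, then a sudden drop that should be flagged.
detector = PressureAnomalyDetector()
for reading in [80.1, 79.9, 80.0, 80.2, 79.8, 80.1, 80.0, 79.9, 80.1, 80.0, 80.2, 42.0]:
    if detector.observe(reading):
        print(f"Flag for review: pressure reading of {reading} bar")
```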
The idea, then, is to use artificial intelligence to monitor these submerged assets, analysing data from sensors to detect anomalies that might suggest damage or tampering. Algorithms would learn normal operational patterns and then flag deviations for human review. It's an early warning system, intended to identify potential threats before they escalate into crises and to enable predictive maintenance. The significance here is about protecting critical infrastructure in an age of heightened geopolitical tension. These pipelines and cables are essential for Europe's energy independence and digital connectivity. Any disruption could have severe economic and social consequences. This also reflects a broader trend of applying artificial intelligence to security challenges, from border control to cyber defence. Of course, relying on artificial intelligence introduces its own set of challenges. The algorithms need to be robust against adversarial attacks, and there's a risk of false positives that could trigger unnecessary alarms and divert resources. It's also worth remembering that artificial intelligence is only as good as the data it's trained on. If the data is incomplete or biased, the artificial intelligence's effectiveness will be limited. The assumption that more data automatically equals greater security is, at best, optimistic. So while using artificial intelligence to safeguard underwater infrastructure seems a logical step, given the increasing threats and the potential consequences of failure, we need to be realistic about its limitations. It's not a silver bullet, and it needs to be carefully implemented and constantly monitored. And, one might add, what happens if the artificial intelligence monitoring the pipelines gets hacked? Do we then have an artificial intelligence war under the sea?

And that takes us to the next point: the rush to build these so-called "agentic artificial intelligence" systems – systems that can operate with a high degree of autonomy, making decisions and taking actions without constant human oversight – is causing some companies to spend a great deal of money without properly considering the potential return on investment. The idea is that these systems can automate complex tasks, improve efficiency, and potentially reduce operational costs. But the development and deployment of these systems involves more than just clever algorithms. It requires careful planning, robust infrastructure, and, crucially, a clear understanding of the costs involved. The problem is that many organisations are leaping into agentic artificial intelligence development without fully assessing the financial implications. They're drawn in by the promise of increased productivity and competitive advantage, but they fail to adequately weigh those benefits against the actual costs. This can lead to systems that underperform, fail to deliver the promised efficiencies, and ultimately leave companies wondering where all the money went. It's not simply a matter of technical expertise; it's a strategic issue that can affect the long-term viability of a business. This highlights a broader trend: the pressure to adopt new technologies quickly, even if the business case isn't entirely clear. The fear of falling behind can drive companies to make significant investments without a clear understanding of the risks and potential rewards. And one can't help but observe that this sort of gold rush mentality usually benefits the vendors selling the picks and shovels more than it does the prospectors. It's a reminder that innovation without pragmatism can be a very expensive hobby.
And we're seeing that play out in various sectors. Take warehouses, for example. They're increasingly turning to edge computing, rather than relying solely on the cloud. Now, edge computing essentially means processing data closer to where it's collected – in this case, inside the warehouse itself – rather than sending it off to a remote data centre. Think of it as having a local brain for immediate decisions, rather than relying on instructions from headquarters miles away. This is particularly useful when you have things like autonomous robots zipping around, needing to react in real time. A slight delay in processing information – what's called "latency" – can be a major problem when you're dealing with fast-moving machinery and tight deadlines. The impact here is fairly straightforward. Warehouses are under constant pressure to be faster and more efficient, and the ability to make immediate decisions based on local data can significantly improve productivity. This has implications for supply chains, delivery times, and ultimately, the cost of goods. It also means less reliance on constant internet connectivity, which can be a weak point in many operations. We have seen similar shifts towards decentralised processing in other sectors, driven by the need for speed, reliability, and a degree of autonomy. The more that artificial intelligence becomes capable of running locally, the more attractive this becomes. Of course, all of this relies on the assumption that these edge devices are secure and that the data they collect is properly managed. One imagines a scenario where a rogue algorithm running on a warehouse robot decides to optimise its route by, say, dismantling the shelving. Suddenly, that efficiency gain doesn't seem quite so appealing. So while this trend will likely continue as warehouses strive to keep pace with the demands of modern commerce, we need to be mindful of the potential risks.

And that brings us to another, perhaps more fundamental, issue: people. Businesses are discovering that introducing artificial intelligence into the workplace isn't quite as simple as plugging it in and watching productivity soar. Apparently, the human element is proving rather… resistant. What we're really talking about is the fairly predictable anxiety employees feel when confronted with the prospect of automation. People are understandably worried about losing their jobs or having their roles fundamentally changed by these new systems. This fear, it turns out, can significantly slow down the adoption of artificial intelligence, which rather defeats the purpose. The advice being offered to management is to focus on communication and training. Engaging with employees, fostering trust, and reassuring them that they're valued – these are considered crucial. The idea is to present artificial intelligence as a tool to assist them, rather than a replacement. Companies are being encouraged to reskill workers and create open forums for them to express their concerns. It's all about managing expectations and emotional readiness, apparently. Of course, the subtext here is the ongoing debate about the future of work in an age of increasing automation. Are these genuine efforts to support employees, or simply a PR exercise to smooth the transition to a leaner, more automated workforce?
One wonders if the organisations so keen to deploy artificial intelligence have truly considered the long-term societal implications of widespread job displacement, or if they are simply focused on short-term gains. Perhaps a little less enthusiasm for the technology and a little more empathy for the people whose livelihoods are at stake would be a good start.

And the impact of artificial intelligence isn't just limited to the workplace. We're seeing it seep into all aspects of our lives, including our finances. A recent study suggests that a significant number of younger adults in the UK are now using artificial intelligence tools to help them manage their money. The research indicates that many are struggling to save as much as they'd like, and are turning to artificial intelligence for guidance. In essence, these individuals are using apps and platforms that offer financial advice, budgeting assistance, and investment suggestions, all driven by algorithms. The appeal seems to be that these tools are more accessible and relatable than traditional financial advisors, who are often perceived as expensive or out of touch. The artificial intelligence promises tailored advice, analysing spending habits and offering actionable insights. The real-world impact here is potentially quite significant. If a large segment of the population is relying on artificial intelligence for financial decisions, that raises questions about the quality and reliability of that advice. Are these algorithms truly capable of understanding individual circumstances, or are they simply churning out generic recommendations? There's also the issue of trust: are people fully aware of the risks involved in handing over their financial data and decision-making to a machine? This trend also reflects a broader pattern of increasing reliance on technology to solve complex problems. We've seen similar trends in areas like healthcare and education, where artificial intelligence is being touted as a solution to various challenges. But as with any technology, it's important to approach artificial intelligence with a healthy dose of scepticism. One wonders, of course, if this generation has simply outsourced common sense. Perhaps it's a sign of the times that we're now looking to machines to tell us how to save money rather than, say, learning from our parents or simply spending less. And what happens when the artificial intelligence financial advisor makes a bad call? Who carries the can then?

And speaking of areas where the stakes are high, AstraZeneca, the pharmaceutical giant, is making a significant investment in artificial intelligence, specifically to accelerate its oncology research. The aim is to use artificial intelligence to sift through the vast quantities of data generated during drug development, hoping to identify patterns and insights that might otherwise be missed. In practice, this means deploying machine learning models to analyse everything from patient records and clinical trial results to genomic data. The goal is to speed up the process of identifying promising drug candidates, optimising clinical trial design, and ultimately developing more effective cancer treatments. The shift towards in-house artificial intelligence development suggests a need for tailored solutions, rather than relying on generic artificial intelligence tools. The stakes are high, both financially and in terms of patient outcomes.
Cancer research is an expensive and time-consuming endeavour, and any technology that can streamline the process could have a significant impact. More efficient research could lead to faster drug approvals, bringing new treatments to patients sooner. It also ties into the broader trend of using artificial intelligence to personalise medicine, tailoring treatments to individual patients based on their unique characteristics. Of course, there are potential pitfalls. Integrating artificial intelligence into complex research workflows requires careful planning and execution. It's not simply a matter of plugging in a piece of software and expecting miracles. There's also the risk of over-reliance on artificial intelligence, potentially overlooking valuable insights that human researchers might have spotted. One wonders if there's a touch of 'not invented here' syndrome at play, with AstraZeneca preferring to build its own artificial intelligence rather than relying on existing solutions. But if they get it right, it could give them a considerable competitive advantage in the race to develop new cancer therapies. And that competition, as always, is what will really drive the innovation here.

But let's not get too carried away with the idea of artificial intelligence revolutionising healthcare just yet. We've seen a flurry of activity from the big artificial intelligence labs this month, with OpenAI, Google, and Anthropic all releasing medical artificial intelligence tools within days of each other. It seems everyone wants to be your doctor, or at least, play one on television. What we're really seeing is a land grab. These companies are rushing to plant their flags in the potentially lucrative territory of artificial intelligence-assisted medical diagnostics. The reality, that said, is somewhat less impressive. None of these new offerings have been approved for actual clinical use. Think of it as a kind of sophisticated medical chatbot – able to offer insights, perhaps, but certainly not able to prescribe treatments or make diagnoses. OpenAI's ChatGPT Health, for example, sits in a regulatory grey area, which is a polite way of saying it's not really a medical device. This matters because healthcare is a heavily regulated industry, and for good reason. The potential for harm is significant, and the standards for accuracy and reliability are understandably high. While these artificial intelligence tools might eventually prove useful, they're currently more about marketing and investor relations than actual patient care. The narrative is of a revolution in healthcare, but the reality is they can't be used in real-world medical settings. It's another example of the wider trend of artificial intelligence companies racing to stake their claim in a promising new market, even if the technology isn't quite ready for prime time. This rush can be useful – competition often drives innovation – but it also raises questions about whether genuine advancements are being obscured by hype. Are we seeing the future of medicine, or just a carefully orchestrated attempt to impress investors? Perhaps I'm just old-fashioned, but I tend to prefer my medical advice from qualified doctors, not algorithms designed to maximise shareholder value. It all points to the need for robust regulatory oversight as artificial intelligence continues to infiltrate new sectors.

Now, all of these artificial intelligence tools, whether they're used for medicine, finance, or anything else, require serious computing power.
And that's leading to another bottleneck, not in the algorithms themselves, but in the hardware needed to run them. Specifically, businesses are finding that graphics processing units, or GPUs, are now essential for scaling up their artificial intelligence operations. GPUs, originally designed for rendering images in video games, excel at parallel processing. This means they can perform many calculations simultaneously, which is exactly what's needed for training and running these complex artificial intelligence models. What started as a research project is now being deployed in customer service, decision support, and general automation. As a result, the ability to efficiently crunch vast datasets is no longer a nice-to-have; it's a competitive necessity. This matters because it shifts the power dynamic. It's no longer just about who has the cleverest algorithm, but about who has access to the most powerful computing infrastructure. Companies that can effectively leverage GPUs will gain a significant advantage, improving efficiency and offering more responsive artificial intelligence-driven services. This also consolidates power in the hands of the manufacturers of those GPUs and the cloud providers who can offer access to them at scale. We're seeing a repeat of the old adage that, during a gold rush, the ones who make the real money are those selling the shovels. Of course, relying heavily on specialised hardware creates its own vulnerabilities. What happens when the supply chain is disrupted, or when a new, more efficient architecture emerges? The history of technology is littered with examples of companies betting big on the wrong horse. Perhaps this GPU dependency will prove to be a strategic advantage, or perhaps it will become an expensive, inflexible bottleneck in its own right. For now, though, the message is clear: if you want to play in the artificial intelligence arena, you're going to need some serious processing power. And that means serious money.

And where is all that processing power going? Well, increasingly, it's being used to try and sell us things. Google, for example, is now deploying artificial intelligence-powered shopping assistants. These digital helpers will handle tasks like product discovery, recommendations, and even customer service. The idea is to make online shopping more efficient. In practice, this means an artificial intelligence will learn your preferences and suggest items, potentially before you've even consciously thought of needing them. It's about streamlining the process, cutting down on endless scrolling and filtering. The implications are considerable. For consumers, it's a trade-off between convenience and data privacy. How much information are we willing to share to avoid a bit of online searching? We've all experienced the slightly unnerving feeling of seeing an online advertisement for something we've only just discussed, and this trend only amplifies that. For retailers, it could widen the gap between large companies with the resources to implement artificial intelligence and smaller businesses that may struggle to compete. The stakes become higher, and the playing field less level. This push for artificial intelligence-driven shopping aligns with a broader trend: the automation of everyday tasks. It promises efficiency, but it also raises questions about control, and about who ultimately benefits from these advancements. The question is whether it's the consumer or the corporation that will truly profit.
One wonders if, in the quest for a frictionless shopping experience, we might find ourselves sleepwalking into a world where our desires are anticipated and fulfilled a little too readily. A world where the serendipity of discovery is replaced by the predictability of algorithms. After all, some of us rather enjoy the accidental find.

And to make this all even easier, Google has announced a new open-source standard intended to streamline how artificial intelligence shopping assistants work. It's called the Universal Commerce Protocol, or UCP. The idea is to allow artificial intelligence agents to do more than simply suggest products. Instead, they could, in theory, handle entire transactions from start to finish within a chat window. Think of asking your artificial intelligence to buy a particular book, and it does so without you ever leaving the conversation. This hinges on creating a common language, so to speak, between artificial intelligence agents and retailer systems, allowing them to communicate seamlessly from product search to final payment. The potential impact is considerable. If successful, it could drastically alter online shopping habits, making it far more convenient. Imagine delegating all your online purchases to an artificial intelligence assistant. Of course, this relies on consumers trusting artificial intelligence with their payment information, and on retailers adopting this new protocol. Given the prevalence of online scams, this raises significant questions about security and the potential for abuse. This development fits into a broader trend of automating more and more aspects of our lives with artificial intelligence. We've seen similar moves in areas like finance and healthcare, where the promise of increased efficiency is often weighed against concerns about data privacy and algorithmic bias. The notion of handing over purchasing power to an artificial intelligence is appealing, in theory. But I can't help but recall the last time I had to dispute an unauthorised charge with my bank, and wonder if this is simply adding another layer of abstraction between us and our money.

And it's not just shopping that artificial intelligence is trying to simplify. Anthropic, the outfit behind the Claude artificial intelligence, has launched a new feature called Cowork. It's essentially a local file management system powered by their artificial intelligence, currently in a research preview for Mac users. The idea is to let Claude manage your files directly on your computer. Think of it as a digital assistant that can organise documents, perhaps edit them, and retrieve data, all without you needing to write any code. They are pitching it as a way to streamline routine tasks and improve productivity for those of us who are not necessarily technically inclined. This matters because it reflects a broader trend we've touched on already: handing artificial intelligence agents ever more direct control over our files, our purchases, and our daily routines.

Well, another week, another deluge of developments. Sorting the wheat from the chaff is becoming a full-time occupation, isn't it? If you'd like a daily digest of the key stories without the hype, you can find it at jonathan dash harris dot online. And if you're looking for a more considered exploration of artificial intelligence in a specific context, my book "Smart Buildings: AI-Powered Efficiency and Sustainability" might be of interest. You can find details at books dot jonathan dash harris dot online slash ai dash buildings. That's it for this week's Turing's Torch.
Keep the flame burning, stay curious, and I'll see you next week with more artificial intelligence insights that matter. I'm Jonathan Harris—keep building the future.