Good morning, London. It's Friday, and while the skies may be partly cloudy, we're here to clear up the murky waters of artificial intelligence. Alan Turing once remarked, "The idea behind digital computers may be explained by saying that these machines are intended to carry out any operations which could be done by a human computer." This essence of computing underscores our mission today: demystifying artificial intelligence and making it accessible to all. Tired of drowning in artificial intelligence headlines? Ready for clarity, insight, and a direct line to the pulse of innovation? Welcome to Turing's Torch: Artificial Intelligence Weekly! I'm Jonathan Harris, your host, and I'm cutting through the noise to bring you the most critical artificial intelligence developments, explained, analysed, and delivered straight to you. Let's ignite your understanding of artificial intelligence, together.

The world seems to be coalescing around a few central obsessions at the moment, many of which involve artificial intelligence in one way or another. We're seeing governments throw enormous sums of money at the problem, companies scrambling to integrate it into their operations, and regulators trying to figure out how to keep it all from going completely off the rails. It's a bit like watching a very complex, very expensive, and potentially very dangerous machine being built in real time. One of the big buzzwords floating around is "sovereign artificial intelligence." Countries are suddenly very keen on the idea of controlling their own artificial intelligence capabilities: building domestic data centres, training models locally, and generally becoming self-sufficient in the artificial intelligence space. The idea, of course, is to avoid relying on other nations for this critical technology. Geopolitical tensions are driving this, along with understandable concerns about data privacy and security. The problem, as with so many grand ambitions, is that the reality is a bit more complicated. The tech world is interconnected. Supply chains are global. Take semiconductors, for instance. A handful of countries dominate their manufacture. As a result, even if a nation builds its own data centres and trains its own models, it's still dependent on others for the hardware. The talent pool is global, too. And then there's data itself, the very fuel that powers these models. It crosses borders constantly, which rather complicates the notion of sovereignty. Data localisation efforts, which aim to keep data within a country's borders, can stifle innovation. You end up with isolated artificial intelligence systems that can't learn from global data, a patchwork of technologies that might be less effective. The ambition is understandable, yet the interconnectedness of technology suggests that true independence in artificial intelligence is more theoretical than practical. It's a bit like trying to build a self-sufficient island in the middle of the ocean. You can try, yet you're always going to be reliant on the outside world in some way or another. And so we're left wondering whether these massive investments in sovereign artificial intelligence will yield the desired returns or simply create expensive, isolated systems that are ultimately less effective than they could be. Perhaps it's another example of technology outpacing politics, a recurring theme in the modern world.
And while governments are busy trying to carve out their own artificial intelligence territories, the tech industry itself is engaged in a rather lively debate about the best way to actually build these systems. Yann LeCun, a name familiar to anyone in the artificial intelligence field, has made a rather public wager against the current obsession with large language models. He believes that pouring resources into these models is a fundamentally misguided strategy. Now, when we talk about "large language models," we're essentially discussing artificial intelligence systems trained on massive datasets of text. They're very good at generating human-like prose, translating languages, and even writing different kinds of creative content. Yet LeCun argues that this approach is a dead end, that these models are essentially very sophisticated parrots, capable of mimicking human language yet lacking genuine understanding. He's advocating for a return to what he calls "world models": artificial intelligence systems that attempt to understand and represent the environment in which they operate. If LeCun is correct, the billions being invested in large language models could be misdirected. It suggests that the future of artificial intelligence lies not in simply scaling up existing technologies but in developing entirely new approaches that prioritise comprehension and reasoning. It also highlights the inherent risks in following the herd, in allowing hype to dictate investment decisions. We've seen this pattern before, of course: the dot-com boom, cryptocurrency mania, and now, perhaps, the artificial intelligence gold rush. It's a timely reminder that progress isn't always linear. It's easy to be swept up in the excitement surrounding these artificial intelligence systems, yet it's crucial to maintain a critical perspective. Ultimately, whether his bet pays off remains to be seen, yet it certainly adds a welcome dose of contrarianism to the ongoing discussion. It's a reminder that sometimes the most valuable insights come from questioning the prevailing wisdom.

And that questioning extends beyond the fundamental architecture of artificial intelligence to the more practical considerations of how it's being used. One area where this is particularly acute is in healthcare. People are increasingly turning to artificial intelligence models, specifically large language models, for health-related queries, essentially asking the machines for a diagnosis instead of consulting a doctor or even using traditional search engines. This represents a move away from sifting through potentially unreliable web pages to receiving what appears to be a coherent and contextually relevant response from an artificial intelligence. That said, these models are only as good as the data they're trained on, and that data can include outdated or incorrect information. Unlike a human doctor, an artificial intelligence cannot conduct physical examinations or truly understand the nuances of an individual's health history. The stakes are high. If people start relying on artificial intelligence for serious health decisions, the potential for misdiagnosis and inappropriate treatment is very real. There are also significant privacy concerns surrounding the sharing of sensitive health information with these systems. Data security and confidentiality must be paramount, yet that is not always assured. This trend underscores a broader pattern we have observed with the rise of artificial intelligence: the temptation to prioritise speed and convenience over thoroughness and accuracy.
The promise of personalised and efficient health information is alluring, yet we must be careful not to let that blind us to the very real risks. After all, it's one thing to ask an artificial intelligence for the capital of Paraguay and quite another to ask it for medical advice. OpenAI, it seems, has released a version of ChatGPT tailored for health advice, promising to connect with medical records and fitness apps to provide personalised insights. It aims to synthesise data from your Apple Health, your MyFitnessPal, and various other sources, and then offer interpretations and advice based on that amalgamated information. The sales pitch, of course, is empowerment through data. The medical community, that said, is expressing some reservations, and understandably so. The core concern is that individuals might misinterpret the artificial intelligence's analysis, potentially leading to self-diagnosis or, worse, a delay in seeking proper medical attention. The worry is not that artificial intelligence is inherently bad but that people may place undue trust in its pronouncements, effectively sidelining the expertise of qualified physicians. Data privacy is, naturally, another looming concern, given the sensitivity of the information involved and the increasing number of health apps collecting data. The potential for breaches or misuse is a valid worry. This development fits into a broader trend of artificial intelligence encroaching upon areas traditionally reserved for human professionals. We have seen it in law, in finance, and now, with particular urgency, in healthcare. The question is not whether artificial intelligence can augment these fields but whether we are adequately prepared for the potential consequences of over-reliance and the erosion of human oversight. One can envision a future where individuals, armed with artificial intelligence-driven health reports, become amateur diagnosticians, flooding doctors' offices with anxieties based on algorithmic interpretations. A little knowledge, as they say, is a dangerous thing, and perhaps even more so when it is delivered with the confident tone of a large language model. Perhaps it is time we all remembered that a chatbot is no substitute for a qualified medical professional.

Moving away from the health sector, we see similar dynamics at play in other areas. The cryptocurrency markets, for example, have been rather volatile, and one consequence is that certain faster, more efficient blockchains are becoming more popular. Solana, in particular, seems to be attracting attention as its price stabilises. When people talk about "layer-1 blockchains," they're essentially referring to the foundational networks on which cryptocurrencies and decentralised applications are built. Think of it as the underlying infrastructure, like the roads and bridges of a digital city. The promise is that these newer networks can process thousands of transactions per second, far more than older systems like Bitcoin or even Ethereum, and without the high fees that have plagued them. This matters because in the fast-paced world of crypto, speed and cost are paramount. Investors are increasingly impatient with slow transaction times and exorbitant fees. They're looking for platforms that can handle the increasing demand for quick and affordable transactions.
The implication is that blockchains that can deliver on this promise are more likely to thrive, attracting more users and investment. This trend also highlights a broader theme in the technology sector: the relentless pursuit of efficiency. Whether it's faster processors, more efficient algorithms, or streamlined user interfaces, the demand for better performance is a constant driver of innovation. And in a market as competitive as crypto, that drive is amplified. It's worth remembering that the crypto world is prone to hype and that promises of superior performance should always be taken with a grain of salt. Just because a blockchain can theoretically handle thousands of transactions per second doesn't necessarily mean it will in practice, especially under heavy load. And of course, speed and low fees are only part of the equation. Security, reliability, and decentralisation are equally important. The question is whether users will remember that in the heat of the moment. For now, it seems, the market is rewarding those blockchains that at least give the impression of being faster and cheaper.

Of course, behind all the hype and the headlines, there's a whole community of researchers quietly plugging away, laying the groundwork for future advancements. The Association for the Advancement of Artificial Intelligence, or AAAI, recently handed out its annual awards for outstanding papers. Think of it as a sort of academic Oscars ceremony for the artificial intelligence world. These awards are given out each year to recognise research papers that the judges deem particularly innovative and well-written. A committee of experts carefully reviews submissions and selects those that really stand out. It's intended to showcase work that pushes the boundaries of knowledge in the field. Now, while the names of the winners may not be familiar to the general public, their work is likely to have a real impact on the direction of artificial intelligence research going forward. It's about setting a benchmark, inspiring others to pursue similar levels of excellence. The hope is that this recognition will encourage further exploration and development in what is, after all, a very rapidly evolving field. It's easy to get caught up in the hype surrounding artificial intelligence, to focus on the latest consumer gadgets or the next big tech breakthrough. Yet sometimes it's worth remembering that behind all the flashy headlines there's a whole community of researchers quietly plugging away, laying the groundwork for future advancements. These awards, in a way, shine a light on that often-overlooked aspect of the artificial intelligence landscape. Of course, one might also observe that any field which requires an "Association for the Advancement of" probably has some people somewhere who are actively trying to hold it back. That's something to consider, isn't it?

And speaking of those who may be holding things back, it's worth noting that the pursuit of cheaper artificial intelligence is running headlong into concerns about where the data comes from and who controls it. For the last year or so, everyone's been obsessed with how powerful these systems are, typically measured by somewhat dubious benchmark scores. Now that companies are actually trying to use artificial intelligence, the conversation is changing. What this really means is that organisations are waking up to the fact that simply chasing the most powerful or the cheapest artificial intelligence can create problems.
They need to ensure that their artificial intelligence systems comply with various regulations about data ownership and privacy, which becomes increasingly complicated as businesses operate across different countries, each with its own rules about where data is stored and how it's processed. Cutting corners to save money can lead to significant legal and ethical problems later on. This is important because the potential cost benefits of artificial intelligence are tempting, yet they can't come at the expense of ethical responsibilities or legal obligations. Companies are under increasing regulatory pressure to be responsible with data. It requires a blend of technical skill and ethical foresight, and many organisations are finding this difficult to achieve. It's not just about the technology itself but about how businesses understand and manage risk in a world where data is incredibly valuable. We're seeing a broader pattern: a growing awareness that the relentless pursuit of artificial intelligence capabilities needs to be tempered with considerations of responsibility and control. The pendulum is swinging, or at least wobbling, away from pure technological ambition and towards something more grounded in real-world constraints. One wonders if perhaps the artificial intelligence arms race was a little premature, like investing in faster horses just before the invention of the internal combustion engine. Still, better late than never to consider the implications of these technologies. And that, in turn, raises questions about the future direction of artificial intelligence development.

This tension between ambition and responsibility is playing out in various ways. SAP, the software giant, is joining forces with Fresenius, the healthcare group, to build an artificial intelligence platform specifically for the medical sector. The idea is to create a secure space for processing data, addressing concerns about patient confidentiality. Now, the term "sovereign artificial intelligence platform" sounds rather grand, doesn't it? Strip away the marketing and it means they're building a system where data is processed in a controlled environment, presumably one that adheres to strict data protection rules. The healthcare industry has been understandably hesitant about fully embracing artificial intelligence, given the sensitivity of patient records. Standard cloud-based solutions often fall short of meeting the necessary governance requirements. The significance here is about trust and control. Healthcare data is incredibly valuable, yet also incredibly sensitive. If SAP and Fresenius can demonstrate that artificial intelligence can be used safely and responsibly in this context, it could unlock a wave of innovation. This is about striking a balance between leveraging the power of artificial intelligence to improve healthcare outcomes and ensuring that patient privacy is not compromised. There's also a potential commercial advantage, of course, in being seen as a trusted partner in this space. We are seeing a broader trend, though, of organisations wanting more control over their data. The initial rush to offload everything to the cloud is now being tempered by a realisation that some things are best kept closer to home, particularly when dealing with sensitive information or regulated industries. Whether this particular venture proves successful remains to be seen. One can imagine that negotiating the regulatory landscape alone will keep teams of lawyers busy for quite some time.
Still, it's a reminder that technology, for all its potential, is only as good as the framework in which it operates.

Citi, the banking group, is attempting a rather comprehensive internal rollout of artificial intelligence tools across its 4,000-strong workforce. Instead of keeping artificial intelligence confined to specialist teams, they are pushing it out to a much broader range of employees, hoping to integrate it into daily operations. Now, what does that actually mean? Well, it suggests that Citi is moving beyond the experimental phase with artificial intelligence. We're not just talking about a few data scientists tinkering in a corner. Instead, they're trying to get ordinary employees to use artificial intelligence in their everyday tasks. Think of it as providing artificial intelligence-powered assistants for everything from data analysis to customer service, perhaps even compliance and risk management. The stakes here are considerable. If Citi can successfully embed artificial intelligence into its operations, it could gain a significant competitive advantage. Faster processing, more accurate analysis, improved customer service – these are all potential benefits that translate directly into increased profitability. Moreover, this could set a precedent for other large organisations, particularly in the financial sector, to follow suit. We have discussed before the creeping automation of white-collar roles; this is simply another example of that trend. That said, there is the inevitable question of job displacement. If artificial intelligence can handle many of the tasks currently performed by human employees, what happens to those employees? Citi will no doubt claim that this is about augmenting human capabilities, not replacing them. Yet history suggests that technological advancements often lead to workforce reductions, sooner or later. One imagines that a good many of those four thousand employees will be updating their CVs, just in case.

Legal technology firm IVO has secured $55 million in funding, suggesting a growing appetite for artificial intelligence in the legal sector. This investment round, led by Blackbird, values IVO at around $355 million, which is a fairly significant sum. Now, what does "legal tech" actually mean? In this case, it seems IVO is developing artificial intelligence tools aimed at automating and streamlining various legal processes. Think document review, legal research, perhaps even drafting initial versions of contracts or legal briefs. The idea is to make legal work faster, cheaper, and more efficient. The implications here are considerable. The legal profession has traditionally been quite resistant to technological change, yet this investment indicates that investors believe artificial intelligence can make inroads. If artificial intelligence can automate some of the more routine tasks currently performed by lawyers and paralegals, it could lead to significant cost savings for clients. It might also free up legal professionals to focus on more complex and strategic work. On the other hand, it could also displace some jobs, particularly those involving repetitive tasks. The broader trend is that artificial intelligence is increasingly being applied across a wide range of industries, and the legal sector is no exception. The question is whether the legal profession will embrace these tools and adapt, or whether they will resist and risk being left behind.
One does wonder, though, whether artificial intelligence can truly grasp the nuances of legal reasoning and ethical considerations. Perhaps it can handle the drudgery, yet can it truly understand the spirit of the law? For the moment, this injection of capital suggests that the legal world may be about to change more rapidly than some within it might prefer.

Vercel, the cloud platform company, has released a set of pre-packaged tools intended to simplify the development of artificial intelligence coding agents. These "agent-skills," as they call them, are designed to encapsulate coding best practices in a way that developers can easily integrate into their workflows. In practice, this means taking years of accumulated wisdom about optimising code, particularly for React and Next.js applications, and turning it into something akin to a software package that can be installed with a single command. Think of it as a curated set of shortcuts and pre-written routines designed to improve code efficiency, streamline web design reviews, and manage deployments on the Vercel platform. The potential impact here is on developer productivity. If these agent-skills genuinely reduce the time and effort required to build and maintain applications, they could become a valuable asset for teams under pressure to deliver quickly. That said, the real test will be whether they can capture the more subtle and nuanced aspects of coding, or whether they end up adding another layer of complexity to an already crowded landscape of tools. This also fits into a broader trend we're seeing, which is the attempt to standardise and productise aspects of artificial intelligence development. The idea is to move beyond the experimental phase and create tools that are reliable, repeatable, and easy to use. The challenge, of course, is that coding is rarely a one-size-fits-all activity, and what works well in one context may be entirely inappropriate in another. One wonders if this is simply an attempt to commoditise expertise. Will these agent-skills truly empower developers, or will they, in the end, simply lead to a more homogenised and predictable style of coding?

Nous Research has released a new artificial intelligence model called NousCoder-14B, specifically designed for competitive programming. It seems to be quite good at solving complex coding problems. Competitive programming is essentially a sport where people write code to solve puzzles, often under intense time pressure. The new model is built on an existing framework but has been further trained using what's called reinforcement learning, which basically means it learns by trial and error, getting rewards for correct solutions. The developers are claiming that it outperforms its predecessor by a significant margin on a specific benchmark test. Why does this matter? Well, competitive programming, while niche, is a good proving ground for artificial intelligence's ability to automate software development. If these models become proficient at that kind of problem-solving, it hints at how far the automation of everyday software work might eventually go.

Well, another week gone by, and another torrent of artificial intelligence announcements. The signal-to-noise ratio, I think we can all agree, continues to require careful management. For a daily dose of curated clarity, you can sign up for the artificial intelligence briefing at jonathan-harris.online.
This week's podcast was brought to you by my book, Artificial Intelligence in Aviation: Transforming Safety and Sustainability, available at books.jonathan-harris.online/ai-aviation. It's a more considered look at the topic for those who prefer substance to speculation. That's it for this week's Turing's Torch. Keep the flame burning, stay curious, and I'll see you next week with more artificial intelligence insights that matter. I'm Jonathan Harris—keep building the future.