Good afternoon. It's Friday and partly cloudy in London, though the real weather we're concerned with is the storm of headlines swirling around artificial intelligence. As Alan Turing wisely noted, "A very large part of space-time must be investigated if reliable results are to be obtained." In that spirit, our mission today is to explore the complexities of artificial intelligence, peeling back the layers of hype to uncover the solid ground beneath. Tired of drowning in artificial intelligence headlines? Ready for clarity, insight, and a direct line to the pulse of innovation? Welcome to Turing's Torch: artificial intelligence Weekly! I'm Jonathan Harris, your host, and I'm cutting through the noise to bring you the most critical artificial intelligence developments, explained, analysed, and delivered straight to you. Let's ignite your understanding of artificial intelligence, together. Right then, let's have a look at what's been rattling around the tech world this week. It seems artificial intelligence is continuing its relentless march into, well, everything. Yet beneath the hype, there are some interesting, and occasionally troubling, developments worth unpacking. First up, money. Or rather, where it's being spent. A recent survey suggests that finance chiefs at large British companies are feeling rather chipper about investing in artificial intelligence. Now, these are the people who control the purse strings, the ones who sign off on the big bets. And they apparently believe that spending on artificial intelligence could make their businesses more productive, even amidst all the current economic uncertainty. They seem to view digital tools, artificial intelligence in particular, as essential for growth. Now, that's a notable shift. Businesses, traditionally, have been a bit wary of new technologies. They like to see a proven track record before opening the chequebook. So, what's changed? Well, the pressure to remain competitive is immense. Everyone's looking for an edge, a way to do more with less. And artificial intelligence, with its promise of automation and efficiency, looks increasingly like the answer. The implications of this are pretty significant. If CFOs are willing to back artificial intelligence, it could lead to some fundamental changes in how companies operate. New business models, new ways of working, perhaps even entirely new industries. Of course, it also raises questions about jobs, about who benefits from these productivity gains, and about the overall shape of the economy. It's easy to be optimistic when you're spending someone else's money, and the track record of large firms successfully implementing complex artificial intelligence projects is, shall we say, patchy. It remains to be seen whether these investments will actually deliver the promised productivity gains or simply add to the growing pile of failed technology initiatives. And it's not just British firms. Bosch, the German engineering giant, is ploughing nearly three billion euros into artificial intelligence for its manufacturing operations. The idea is to collect and analyse vast amounts of data from factory floors. Think cameras watching production lines, sensors monitoring every machine, software logging every step. The goal is to use artificial intelligence to make sense of all this, to spot problems before they happen, and to make factories more efficient.
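To give you a flavour of what "spotting problems before they happen" can mean in practice, here is a deliberately tiny sketch of a sensor monitor built around a rolling z-score. It is purely illustrative, with a window and threshold of my own choosing; Bosch's actual systems will be considerably more sophisticated.

```python
# Minimal sketch: flag a sensor reading that drifts well outside its
# recent history. Window and threshold are illustrative choices only.
from collections import deque
from statistics import mean, stdev

def make_monitor(window: int = 50, threshold: float = 3.0):
    history = deque(maxlen=window)

    def check(reading: float) -> bool:
        # Anomalous if the reading sits more than `threshold` standard
        # deviations away from the mean of the recent readings.
        anomalous = (
            len(history) == window
            and stdev(history) > 0
            and abs(reading - mean(history)) / stdev(history) > threshold
        )
        history.append(reading)
        return anomalous

    return check

monitor = make_monitor()
for temp in [70.1, 70.3, 69.9] * 20 + [85.0]:  # steady readings, then a spike
    if monitor(temp):
        print("alert: abnormal reading", temp)
```

The point is only that "factory artificial intelligence" often starts from something this mundane: a stream of numbers and a rule for deciding when one of them looks wrong.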
They're not alone, of course. We're seeing more and more companies trying to use artificial intelligence to optimise existing processes, and manufacturing is a prime target. Fewer breakdowns, faster production, a bigger profit margin – the potential benefits are enormous. Yet Bosch themselves admit that many manufacturers, including them, are already drowning in data but still struggling to turn it into useful action. So this investment is about trying to close that gap: turning raw data into something that actually improves how things are made. Now, whether this massive investment will actually pay off is an open question. It's easy to collect data; it's much harder to turn it into something genuinely useful. One suspects that the factories of the future may well look a lot like the factories of today, only with slightly more blinking lights and the faint hum of servers in the background. Still, it's a significant bet on the power of artificial intelligence to transform a very traditional industry. And it's a reminder that the promise of artificial intelligence isn't just about creating entirely new things; it's also about making existing things better, faster, and cheaper. But, and it's a big but, all this talk of productivity and efficiency often overlooks a crucial aspect: circulation. What I mean by that is the way goods and services actually move through the economy and reach consumers. The argument is that simply focusing on artificial intelligence's potential to increase production could lead to a situation where we have more goods and services than people can afford. If the benefits of artificial intelligence are concentrated in the hands of a few, we could end up with a very efficient yet deeply unequal economy. The challenge then becomes ensuring that these productivity gains translate into broader economic benefits, and that the wealth is more widely distributed. It's not just about increasing GDP, but about improving living standards for everyone. This echoes concerns we've heard previously about the potential for artificial intelligence to exacerbate existing inequalities in the job market, further widening the gap between those who benefit from technological progress and those who are left behind. And it's not just about wealth distribution. There's also the question of jobs. One forecast suggests that by 2026, businesses will be deploying task-specific artificial intelligence agents, essentially digital interns, to handle routine operations. Now, the term "artificial intelligence agent" can sound rather grand, but in essence we're talking about programs designed for specific jobs, rather than the all-purpose chatbots we've become accustomed to. Think of it as moving from a Swiss Army knife to a set of specialised tools. The idea is that these systems will integrate directly into workflows, automating tasks like data analysis or basic customer inquiries, freeing up human employees for more complex work. If businesses adopt these systems widely, we could see a shift in the structure of the workforce. The promise, of course, is increased productivity and efficiency. The risk is, inevitably, job displacement, particularly in roles involving repetitive tasks. This also raises ethical questions about the level of autonomy we're willing to grant these machines. It feeds into the ongoing debate about how artificial intelligence is reshaping the nature of work itself, a debate we've touched on repeatedly this past year.
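For the curious, here's a rough sketch of what a "digital intern" of this kind boils down to. None of the forecasts name an implementation, so everything below is a hypothetical placeholder: `call_model` stands in for whatever language model API you like, and the two tools are invented.

```python
# Illustrative sketch of a task-specific agent: a loop that routes a
# request to one of a fixed set of narrow tools, rather than holding
# an open-ended conversation. All names here are invented placeholders.

def call_model(prompt: str) -> str:
    raise NotImplementedError("stand-in for a real language model call")

# A deliberately small toolbox: the agent can only do these jobs.
TOOLS = {
    "summarise_report": lambda arg: f"summary of {arg}",       # placeholder
    "check_order":      lambda arg: f"status of order {arg}",  # placeholder
}

def run_agent(task: str) -> str:
    # Ask the model for a routing decision only, not a free-form answer.
    decision = call_model(
        f"Task: {task}\nReply as 'tool|argument'. "
        f"Available tools: {', '.join(TOOLS)}"
    )
    tool, _, arg = decision.partition("|")
    if tool not in TOOLS:
        return "Declined: task is outside this agent's remit."
    return TOOLS[tool](arg.strip())
```

The specialised-tools point is the whole design: the model's autonomy is bounded by the toolbox, which is also where the ethical question of how much autonomy to grant these machines becomes concrete.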
It all sounds rather utopian, or dystopian, depending on your point of view. These systems will likely be more effective at some tasks than others, and integrating them into existing workflows will undoubtedly present challenges. It's worth remembering that the paperless office has been just around the corner for decades now. I'll believe in artificial intelligence interns when they can actually make a decent cup of tea. Moving on, let's consider the chatbots. There's a flurry of new entrants to the market, all promising a more "natural" or "human-like" conversation experience. We've got "Flipped," "Yollo artificial intelligence," and "RushChat," all vying for our attention. The common thread is a desire to move away from the rigid, pre-scripted interactions we've come to expect. They're trying to create a more continuous exchange, supposedly letting you explore thoughts and scenarios more organically. The interfaces often pre-load a character's name, image, and some context, to give you a sense of who you're "chatting" with and to set expectations. It's meant to mimic real-life discussions, where you don't have to re-establish context with every sentence. The practical impact of this is that companies are hoping to make chatbots more engaging and, frankly, less annoying to use. If people find them more pleasant, they might use them more often for customer service or even just for casual interaction. This could shift how businesses interact with customers, potentially reducing the need for human agents in certain situations. It's also another demonstration of the continuing pressure to replace workers with machines. Of course, the claim that this feels like a "real-life discussion" is a bit of a stretch. It's still a computer program generating text based on algorithms. The "personality" is just a carefully crafted illusion. And while it might be less stilted than some other chatbots, it's still a long way from a genuine human connection. As these models become more sophisticated, the line between human and machine becomes increasingly blurred. One might argue that the quest for "natural" conversation with a chatbot is fundamentally misguided. After all, we know it's not a real person. Is it not somewhat unsettling to seek genuine connection from an artificial entity? Perhaps we should be focusing on making artificial intelligence more efficient and transparent, rather than trying to trick ourselves into thinking we're having a meaningful conversation. Still, if it manages to make the experience of interacting with a chatbot slightly less painful, it might be a step in the right direction. At any rate, it will be interesting to see if these new chatbots can break through the chatbot fatigue that seems to be setting in. And it's not just chatbots. We're seeing similar trends in artificial intelligence image generation. There's a new platform called "Flipped," which is pitching itself as the freewheeling alternative to more restrictive platforms. The core idea is straightforward: you describe the image you want, specifying details like clothing, setting, and mood. The system then generates an image based on that description. The emphasis is on user control and creative freedom, apparently allowing for the generation of adult content without the limitations imposed by other services. It's designed to feel more like a conversation than a technical exercise, according to its proponents.
Many artificial intelligence image generators have strict content policies, which can frustrate users who feel creatively stifled. Flipped is aiming to capture that market by offering a more permissive environment. This could appeal to professional artists, hobbyists, or anyone who feels that existing platforms are too restrictive. This also touches on the ongoing debate about the role of artificial intelligence in creative expression. Are these tools meant to be tightly controlled and sanitised, or should they allow for a broader range of creative exploration, even if that includes potentially controversial content? And who gets to decide what constitutes "controversial," anyway? It's easy to imagine the usual suspects getting themselves worked up about this. Yet I wonder if, in the long run, a platform that prioritises user input above all else will simply become a race to the bottom, flooded with the kind of content that makes even the most hardened internet users wince. Still, it's a free country, or at least it was the last time I checked. Meanwhile, a rather large percentage of Britons, around 59 per cent, are now using artificial intelligence for self-diagnosis of medical conditions. This figure comes from research conducted by a life insurance comparison website, and it indicates a growing reliance on artificial intelligence for health-related information. People are using artificial intelligence-powered tools, often accessed through simple search engines, to investigate symptoms, explore treatment options, and even check for potential side effects of medications. In essence, they are consulting with a digital doctor before, or perhaps instead of, consulting with a real one. On the one hand, it could lead to greater health awareness and proactive management among the population. If people are more informed, they might be more likely to seek professional help when necessary. That said, there's also the risk of misdiagnosis and improper treatment. Artificial intelligence, for all its advancements, cannot replace the nuanced judgment of a qualified healthcare professional. The study suggests that a significant minority, around 11 per cent, may be using artificial intelligence exclusively for serious health assessments, which is a worrying prospect. This trend also raises questions about the spread of misinformation. The internet is already awash with inaccurate or misleading health advice, and artificial intelligence systems are only as good as the data they are trained on. If that data is biased or incomplete, the advice they provide could be equally flawed. We may see this as part of a broader pattern of individuals seeking out alternative or automated sources of expertise, perhaps driven by convenience or a distrust of established institutions. One wonders if people actually believe what these systems tell them, or whether they are simply looking for confirmation of their existing fears. Either way, it seems unwise to trust your health to an algorithm whose primary purpose is to serve you targeted ads. The key, as ever, is to maintain a healthy dose of scepticism and remember that technology is a tool, not a replacement for human expertise. And perhaps a real doctor. Now, behind all these shiny new applications and services, there's a lot of hard graft going on in the artificial intelligence research labs. And one of the persistent challenges they face is training these ever-larger language models.
We're hearing more about the challenges of training very large language models, and a new technique called DeepSeek mHC is attempting to address one of the most persistent: training instability. In essence, as these models become larger and more complex, the process of teaching them becomes increasingly erratic. It's like trying to balance a tower of bricks; the more bricks you add, the more likely it is to topple. DeepSeek's approach, Manifold-Constrained Hyper-Connections, is a way of rethinking how the connections within the model operate as it scales up, with the aim of keeping things stable. If researchers can't reliably train these massive models, progress in artificial intelligence will be hampered. Instability leads to unpredictable results, wasted resources, and slower innovation. If DeepSeek's method, or something like it, proves successful, it could unlock further advances in artificial intelligence capabilities. It's also worth noting that as these models are integrated into more and more applications, from chatbots to medical diagnosis, their reliability becomes paramount. An unstable model is, by definition, an unreliable one. This also fits into a wider pattern we're seeing: the increasing concentration of power in the hands of those who can afford to train these behemoth models. Techniques that reduce the cost and complexity of training, even incrementally, could help to democratise access to advanced artificial intelligence, or at least slow down the centralising effect. The DeepSeek team has been wrestling with a technical problem in the training of large language models. It appears that certain design choices intended to improve performance are, in practice, causing instability as these models get larger. The issue boils down to something called "hyper-connections" within the neural network. Early attempts to build very deep neural networks ran into the problem of vanishing gradients: the signal used to train the network would weaken as it passed through successive layers, effectively preventing the network from learning. Then came "residual connections," a clever architectural trick that allowed the training signal to bypass layers, making it possible to train much deeper networks. Hyper-connections are a further refinement of this idea, but they seem to introduce new problems. The DeepSeek team has proposed a method, which they call Manifold-Constrained Hyper-Connections, to retain the benefits of hyper-connections while avoiding the instability. The size and complexity of these models is directly related to their capabilities. If you can't reliably train a larger model, you hit a performance ceiling. This affects everything from the quality of search results to the accuracy of automated translation, to the plausibility of artificial intelligence-generated content. The ability to scale models efficiently is a key competitive advantage, and any technique that allows for larger, more stable models is valuable. As these systems grow more complex, the technical challenges of building and maintaining them also increase. It's rather like adding ever more lanes to a motorway: eventually the system becomes so complex that even minor disruptions can cause widespread gridlock. It seems that the DeepSeek team is attempting to avoid precisely that outcome. One might also observe that this relentless pursuit of ever-larger models raises the question of diminishing returns. At what point does the added complexity outweigh the benefits? Are we simply building bigger and bigger towers, not because they're fundamentally better, but because we can? Perhaps the real innovation lies not in size, but in efficiency.
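For those who like to see ideas in code, here's a toy PyTorch sketch of the two architectural ideas just mentioned. To be clear, the "manifold constraint" below is my own simplification, a softmax that keeps each mixing row summing to one; it is not DeepSeek's published formulation. The point is only to show a classic residual connection next to a hyper-connection-style block with constrained mixing.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Classic residual connection: output = input + f(input)."""
    def __init__(self, dim: int):
        super().__init__()
        self.f = nn.Linear(dim, dim)

    def forward(self, x):
        # The identity path lets the training signal flow past the layer,
        # which is what made very deep networks trainable.
        return x + torch.relu(self.f(x))

class HyperConnectionBlock(nn.Module):
    """Hyper-connection-style block: several parallel streams of the
    hidden state, with a learned matrix deciding how they mix."""
    def __init__(self, dim: int, n_streams: int = 4):
        super().__init__()
        self.f = nn.Linear(dim, dim)
        self.mix = nn.Parameter(torch.eye(n_streams))

    def forward(self, streams):  # streams: (n_streams, batch, dim)
        # Simplified stand-in for a manifold constraint: the softmax makes
        # each output stream a convex combination of the inputs, so the
        # mixing cannot inflate norms layer after layer, which is the kind
        # of blow-up behind the instability described above.
        w = torch.softmax(self.mix, dim=-1)
        mixed = torch.einsum("ij,jbd->ibd", w, streams)
        return mixed + torch.relu(self.f(mixed))

streams = torch.randn(4, 2, 8)  # 4 streams, batch of 2, width 8
out = HyperConnectionBlock(8)(streams)
print(out.shape)  # torch.Size([4, 2, 8])
```

The unconstrained version would simply learn `self.mix` freely; constraining it is the "keep the tower of bricks balanced" part.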
And speaking of efficiency, one of the challenges in building truly useful artificial intelligence systems is dealing with the limitations of memory. The next generation of artificial intelligence assistants, the so-called "agentic" ones, are running into a bit of a memory problem, which is ironic, given that memory is what computers are supposed to be good at. We're talking about artificial intelligence systems designed to handle more complex tasks, things beyond simple question-and-answer sessions. Think of them as digital assistants capable of planning, executing, and learning from their experiences. To do this effectively, they need to remember past interactions and decisions. The trouble is, as these models grow larger and try to retain more information, the cost of recalling that information goes up dramatically. It's a bit like trying to find a specific book in a library that's growing exponentially. This isn't just a technical issue for the engineers. It has real implications for businesses looking to use these advanced artificial intelligence systems. If the cost of accessing past data becomes too high, it could make these systems impractical for many real-world applications. Imagine a customer service artificial intelligence that forgets previous conversations, or a financial analysis tool that can't recall past market trends. The potential for inefficiency, and frankly, error, becomes quite significant. We're seeing a wider trend of artificial intelligence development pushing against existing infrastructure limits. Whether it's energy consumption, data storage, or, in this case, memory architecture, the ambition of the models is bumping up against the realities of what's achievable and affordable. The promise of seamless, intelligent workflows powered by these systems is seductive, but if the underlying memory architecture can't keep pace, we may find ourselves drowning in data, unable to extract the insights we need. One is reminded of that old line about computers solving problems we didn't know we had, in ways we don't understand. And there's also been a development in language model design that might address some of these inherent limitations we've seen so far. It's called a Recursive Language Model, or RLM. The core idea is that instead of feeding these large language models an entire prompt in one go, the model treats the prompt more like an environment it can explore. Think of it as shifting from a single linear ingestion of data to a more dynamic interaction, where the model can use code to selectively inspect and parse the information it needs. In practical terms, this means the model can decide which parts of a lengthy prompt are most relevant and focus on those, rather than trying to process everything at once. It's analogous to having a conversation with someone who can filter out the noise and concentrate on the key points. This approach could also allow models to handle longer conversations more effectively, maintaining context over extended interactions. A company called Prime Intellect has already demonstrated this with their RLMEnv, showing how these models can be applied in scenarios requiring sustained dialogue and complex questions.
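The published RLM work has the model writing code to inspect its own prompt; the sketch below replaces that with a fixed skim-then-answer recursion, so treat it as a cartoon of the idea rather than anyone's actual system. `ask_model` is a hypothetical stand-in for a language model call.

```python
# Cartoon of the recursive idea: treat a long prompt as something to
# explore in pieces, rather than ingest in one go. Purely illustrative.

def ask_model(question: str, context: str) -> str:
    raise NotImplementedError("stand-in for a real language model call")

def answer_over_long_prompt(question: str, document: str,
                            chunk_size: int = 2000) -> str:
    # Skim: inspect each chunk cheaply, keep only what seems relevant.
    chunks = [document[i:i + chunk_size]
              for i in range(0, len(document), chunk_size)]
    relevant = [c for c in chunks
                if ask_model(f"Is this relevant to: {question}? yes/no", c)
                .strip().lower().startswith("yes")]
    merged = "\n".join(relevant)
    # Recurse while filtering keeps shrinking the context...
    if len(merged) > chunk_size and len(merged) < len(document):
        return answer_over_long_prompt(question, merged, chunk_size)
    # ...then answer over the distilled context alone.
    return ask_model(question, merged)
```

The appeal is that the expensive final pass only ever sees the distilled context; the risk, as with any such filter, is that relevance judged chunk by chunk throws away something that only mattered in combination.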
If these recursive models can indeed improve the efficiency and accuracy of information retrieval, it could change how we interact with artificial intelligence, making it feel less like a brute-force data crunch and more like a thoughtful exchange. This is particularly relevant in an age of information overload, where the ability to filter and focus is paramount. Then again, it may just be a more elaborate workaround for the fundamental limitations of the underlying architecture. We've seen similar attempts to address context windows before, and the results have been… variable. Still, the move towards more selective and dynamic information processing is an interesting one, and may yet yield some surprising results. Finally, let's turn to the question of security. Because, as with any powerful technology, artificial intelligence also presents new risks. There's been some interesting work published on creating artificial intelligence systems that can test themselves for vulnerabilities. The method involves building what's called a "red team" of artificial intelligence agents, specifically designed to generate malicious inputs like adversarial prompts, and then launching these attacks against a target artificial intelligence. The target artificial intelligence is, one hopes, under some form of supervision, and its responses are carefully analysed to see how well it withstands the assault. "Red teaming" itself isn't new; it's a common practice in cybersecurity. But applying it to artificial intelligence, particularly to so-called "agentic" systems that can use tools and make decisions autonomously, introduces some new wrinkles. The main goal here is to catch problems like "prompt injection," where a malicious prompt can trick an artificial intelligence into doing something it shouldn't, or "tool misuse," where the artificial intelligence misuses the tools it has access to. The importance of this kind of self-testing becomes clearer when you consider how quickly artificial intelligence is being integrated into all sorts of systems, from customer service to financial trading. A vulnerability in any one of these systems could have serious consequences, ranging from data breaches to financial losses. Well, another week, another deluge of artificial intelligence developments. It's important to sift through the noise, isn't it? If you'd like a daily digest of the important artificial intelligence news without the froth, you can sign up for my newsletter at jonathan-harris dot online. And if you're looking for a more in-depth exploration of the legal ramifications of artificial intelligence, you might find my book "Artificial Intelligence and the Law: Case Studies and Future Trends" useful. It's available at books dot jonathan-harris dot online slash ai-law. It's for those who want understanding, not buzzwords. That's it for this week's Turing's Torch. Keep the flame burning, stay curious, and I'll see you next week with more artificial intelligence insights that matter. I'm Jonathan Harris. Keep building the future.