Good afternoon. As the torrential rain shower in London reminds us of the unpredictable nature of the world we turn our attention to a subject that has sparked debate and curiosity: artificial intelligence. Alan Turing once remarked "The original question 'Can machines think?' I believe to be too meaningless to deserve discussion." This statement highlights our mission today—to demystify the complex landscape of artificial intelligence cutting through the noise and focusing on what truly matters. Tired of drowning in artificial intelligence headlines? Ready for clarity, insight, and a direct line to the pulse of innovation? Welcome to Turing's Torch: artificial intelligence Weekly! I'm Jonathan Harris your host and I'm cutting through the noise to bring you the most critical artificial intelligence developments explained analysed and delivered straight to you. Let's ignite your understanding of artificial intelligence, together. OpenAI, the name behind ChatGPT, is assembling a team of consultants. Not just tech explainers yet people who can translate the intricacies of artificial intelligence into something a boardroom can understand and more importantly be persuaded to invest in. It's not enough to have clever technology, it turns out. Corporations often struggle to integrate disruptive technologies, and artificial intelligence certainly qualifies. These consultants will act as intermediaries, explaining the benefits, customising solutions, and generally greasing the wheels of adoption. The significance of this is twofold. First, it highlights the pressure on artificial intelligence companies to actually generate revenue, not just headlines. OpenAI has a stated aim of reaching a hundred billion dollars in revenue within a few years and that kind of target focuses minds. Second, it underscores the challenges of selling artificial intelligence to established businesses. It's not enough to have a superior product; you need to convince people it's worth the investment and the inevitable disruption. This push reflects a wider trend: the artificial intelligence industry is waking. up to the fact that implementation is just as important as innovation. It's easy to get swept up in the possibilities, yet somebody has to actually make it work. And that's where the consultants come in, presumably with PowerPoint decks in hand. One wonders, of course, whether this is simply a case of throwing more bodies at the problem. Consultants are not always cheap and it remains to be seen whether they can truly bridge the gap between technological possibility and corporate reality. Perhaps a more fundamental question is whether these corporations actually need the level of artificial intelligence integration being offered or whether it's simply a solution in search of a problem. The lure of being seen as cutting-edge is powerful, even if the cutting edge is slightly blunt. Meanwhile, Microsoft researchers have announced a new method for finding hidden backdoors in large language models. Think of it as a digital smoke detector designed to sniff out malicious code that lies dormant waiting for a specific trigger to activate. The jargon here revolves around "poisoned models" and "sleeper agents." In essence someone could intentionally introduce subtle flaws into these language models during their development. These flaws act like trapdoors, allowing an attacker to later manipulate the model's behaviour in unpredictable ways. It's a supply chain risk: if you incorporate a pre-trained model from elsewhere, you're trusting that it hasn't been tampered with. Now, the more we rely on these language models for critical tasks, the more vulnerable we become to sabotage. Imagine a language model used in a bank's fraud detection system. A sleeper agent could be activated to ignore fraudulent transactions under certain conditions, potentially costing the bank millions. Or consider a model used in a self-driving car – a backdoor could be exploited to cause an accident. The ability to detect these threats without knowing what to look for is a significant step forward in securing these systems. This also speaks to the broader issue of transparency and trust in artificial intelligence. As these models become more complex, it becomes harder to understand how they work and whether they are behaving as intended. This new tool from Microsoft offers a way to peek under the hood yet it also raises questions about who should be responsible for ensuring the safety and security of these models. It also underlines a point worth dwelling on: security often arrives after the fact. The fact that such backdoors are possible at all and that detecting them requires novel techniques suggests that the initial focus on rapid development may have come at the expense of robust security considerations. It's a bit like building a house and only thinking about the locks after someone's already broken in. Let's hope future development of artificial intelligence systems prioritises defence rather than assuming it. For now it seems that this new scanning technique offers a crucial layer of protection in a world increasingly reliant on artificial intelligence and we should perhaps be thankful for small mercies. That brings us to the practical realities of implementing artificial intelligence. A recent artificial intelligence conference in London highlighted a growing tension between. the promise of artificial intelligence and the reality of putting it to work. It seems the initial excitement surrounding flashy new models is giving way to a more. sober assessment of what it actually takes to integrate artificial intelligence into existing business operations. When we talk about "integrating artificial intelligence," we're not just talking about plugging in a piece of software. We're talking about fundamentally changing how a business operates. This involves everything from updating legacy computer systems to managing vast. amounts of data and training staff to use new artificial intelligence-powered tools. Many companies are discovering that their existing infrastructure simply isn't up to the task and that the cost and complexity of upgrading can be significant. This matters because the companies that can successfully navigate this transition stand to gain a significant competitive advantage. artificial intelligence has the potential to automate tasks, improve decision-making, and create new products and services. Yet those who can't overcome the practical challenges risk falling behind. We've seen this pattern before, of course, with earlier waves of technological change. This also speaks to the broader conversation around the role of artificial intelligence in the economy. While there's a lot of focus on the potential for artificial intelligence to displace jobs the conference seemed to highlight the skills gap that exists right now. Companies need people who can understand how artificial intelligence works how to manage the data it relies on and how to use it to solve real-world problems. It is not enough to simply have the tools: you need the people to use them. It's interesting how quickly the narrative around artificial intelligence has shifted. It was only a short time ago that we were being told artificial intelligence was going to solve all our problems. Now, the conversation is much more nuanced, and perhaps more realistic. One might even say, dare I suggest, that the field is finally growing up. And that, I suspect, is probably a good thing. Some rather large companies are now experimenting with artificial intelligence in a more ambitious way than before moving beyond simple chatbots to actual artificial intelligence agents that perform real work inside their existing business systems. These aren't just tools to answer basic questions. We're talking about systems that could, in theory, handle significant responsibilities that previously required a human employee. OpenAI, for example, has launched a platform designed to make it easier for businesses to deploy these artificial intelligence agents. The idea is that these agents can automate tasks, boosting productivity and efficiency. The potential impact is considerable. If successful, we could see a fundamental shift in how businesses operate, with artificial intelligence handling an increasing amount of work. This naturally raises concerns about the future of employment. While there's excitement about the potential for increased efficiency and of course increased profits there's also a very real anxiety about job displacement and the changing nature of work. Employees will likely need to adapt to a workplace where artificial intelligence. is not just an assistant yet a colleague capable of executing tasks independently. It's not a completely new conversation, of course. Automation has been with us for decades. Yet the speed and scope of these new artificial intelligence systems is what makes this different. How businesses will balance the benefits of artificial intelligence integration with. the ethical considerations surrounding employment is a question that isn't going away. One wonders if some of these companies are as a result keen to be seen. as innovative that they're rushing into this without fully considering the potential downsides. After all a perfectly optimised artificial intelligence-driven company that's also deeply unpopular with its workforce might not be a recipe for long-term success. For the time being, it will be interesting to see how these trials progress. We are, after all, the test subjects. And this leads to a related point: reliability. There's been some chatter about improving the reliability of artificial intelligence systems by separating the underlying logic from the search process. In essence, this involves designing artificial intelligence agents where the core reasoning is distinct from the mechanisms used to find solutions. Think of it like this: the logic is the recipe and the search is the method used to find the ingredients and cook the dish. If you separate them, you can change your shopping strategy without altering the recipe itself. In artificial intelligence terms, it means you can adjust how the system explores possibilities without messing up its fundamental reasoning abilities. Why does this matter? Well, large language models, for example, can be a bit temperamental. A prompt that works perfectly one day might produce nonsense the next. This unpredictability makes it difficult to deploy these systems in real-world applications where consistency is crucial. By decoupling logic and search, developers can build more robust systems that are less prone to these kinds of errors. It allows for easier experimentation and refinement, adapting more quickly to evolving business needs. The goal is to make artificial intelligence more dependable as it becomes further integrated into everyday operations. The conversation around artificial intelligence reliability is not new, of course. We have seen quite a bit of work recently aimed at predictability and explainability, which are different sides of the same coin. The idea of modularity is helpful, yet it does add complexity. One might suggest that if your artificial intelligence is as a result unreliable that it needs this kind of architectural intervention perhaps you ought to have a look at the underlying artificial intelligence model itself. For now, the emphasis on making artificial intelligence more stable and predictable is a welcome, if somewhat belated, development. And it's not just about the software, it's about how artificial intelligence interacts with the physical world. Robots are getting better at understanding the space around them, and that's largely down to improvements in how we label spatial data. This is becoming increasingly important as robots move into more complex roles. Spatial data annotation, in essence, is the process of teaching a robot what it's seeing. Think of it as adding tags to the world. A drone needs to know that's a tree, that's a building, that's a person. An autonomous vehicle needs to differentiate between a lane marking and a pedestrian. A surgical robot needs to identify tissue types with absolute precision. The more accurate and detailed the annotations, the better the robot can perform its task. The applications are wide-ranging and impactful. We're talking about safer autonomous vehicles, more efficient delivery drones, and more precise surgical procedures. The accuracy of these systems hinges on the quality of the data they're trained on. If the data is poorly annotated, the robot will make mistakes, and in some cases, those mistakes can have serious consequences. This is especially true in situations where robots are working alongside humans. This also ties into a larger issue about the safety and. reliability of these systems as they become more integrated into our lives. There's a real need for careful oversight and robust testing to ensure that these robots are functioning as intended. If a self-driving car misinterprets sensor data because of poor spatial annotation, the results could be catastrophic. The challenge, of course, is scaling up the annotation process to meet the growing demands of the robotics industry. As the applications become more diverse, the tools and methodologies for data annotation need to evolve accordingly. It's not enough to just capture the data it needs to be done in a way that's consistent reliable and useful for the machine learning models that will power the next generation of robots. One could be forgiven for wondering whether the humans training the machines are up to the task. It's a reminder that even the most advanced robotics rely on good old-fashioned human input. And speaking of humans there's a new application on the market that animates still images and allows you to converse with simulated characters. It's called ASHK. The idea is that you upload a photograph or create a digital image within the application then select from a range of animation styles. The software then generates a short video clip based on your choices. In addition to this ASHK offers a chat platform where you can interact with various artificial intelligence personalities ranging from anime figures to representations of historical figures. You can switch between these characters and have simulated conversations with them. This matters because it represents a further blurring of the lines between content creation and social interaction. The promise is that anyone regardless of their technical skill can produce engaging video content and have simulated interactions with artificial intelligence personalities. The business model is not yet fully clear yet the appeal is obvious: a blend of creative tools and social engagement all within a single application. One wonders, of course, whether this is simply another example of technology offering a superficial imitation of human creativity and connection. We may find ourselves surrounded by highly polished, easily generated content that ultimately lacks depth or genuine emotional resonance. Perhaps the true innovation lies not in simulating creativity yet in fostering it. And on that note, I shall leave you to your thoughts. There's also a new chatbot in the market called Sakura and it seems to be pitching itself as a more… let's say adult conversationalist. What that means, in practice, is that it's designed to engage in dialogues that other chatbots might politely decline. The marketing emphasizes a more fluid and natural interaction, prioritising user intent over rigidly scripted responses. Apparently it also remembers past conversations to provide a sense of continuity which is a nice touch if you're looking for that sort of thing. Now, why does this matter? Well it points to a growing market for artificial intelligence companions that can handle a wider range of topics including those that might be considered taboo or sensitive. It also raises questions about the ethical boundaries of artificial intelligence and the potential. for these technologies to be used in ways that could be harmful or exploitative. We've seen how easily people can form attachments to these systems, and the potential for manipulation is certainly there. This fits into a broader trend of artificial intelligence becoming more personalized and integrated into our daily lives. The question is, are we ready for artificial intelligence that is not just intelligent, yet also, shall we say, adventurous? It seems inevitable that these boundaries will continue to be tested. It's all part of the ongoing experiment of what we are prepared to let machines do for us, or perhaps, to us. The fact that this is even a thing suggests some people are lonely, or perhaps just easily amused. Of course, not everyone is thrilled about the relentless march of artificial intelligence. Mozilla the organisation behind the Firefox web browser is introducing a very simple way for users to completely disable all artificial intelligence features within the browser. This includes turning off existing artificial intelligence tools as well as preventing any new ones from being automatically switched on in future updates. Essentially, they're adding an "off" switch for artificial intelligence. It's a control that stops the browser from prompting you to use these features, or from just activating them without explicit permission. In practice it means that if you don't want artificial intelligence summarising web pages or suggesting search terms or whatever else they might add down the line you can just say as a result and the browser will respect that. Now, why does this matter? Well, it's a question of control. Most technology companies seem determined to integrate artificial intelligence into everything, whether users want it or not. This move by Mozilla is somewhat unusual because it acknowledges that not everyone wants these features and gives them a simple way to opt out. This is increasingly important as more and more software becomes laden with artificial intelligence functionality that many find intrusive or simply unnecessary. The broader trend here is about user agency. We're seeing a growing pushback against the relentless integration of artificial intelligence into every aspect of our digital lives. People are beginning to question whether these features are genuinely useful or just another way for tech companies to gather data and control our online experience. Of course one could argue that if the artificial intelligence features were genuinely useful people wouldn't want to turn them off in the first place. Perhaps the real issue isn't the lack of an "off" switch yet the lack of a compelling reason to switch them on at all. That said, it is a welcome step toward user empowerment in a world where such empowerment is becoming increasingly rare. All of this activity does, of course, attract the attention of the big money. There was a rumour and for a short while a rather persistent one that Nvidia was about to invest an astonishing one hundred billion dollars in OpenAI the company responsible for ChatGPT. Now, that figure is as a result large it almost ceases to have any real meaning. To put it in perspective, it's roughly the GDP of Ecuador. Nvidia, for those unfamiliar, designs and sells graphics processing units. These GPUs are essential for training the large language models that underpin much of the current artificial intelligence boom. OpenAI, of course, is the creator of one of the most widely used and talked-about artificial intelligence systems. A merger of this scale would have been, well, seismic. The fact that it didn't happen, that it appears to have been largely smoke and mirrors, is significant. It's a reminder that even in the frenzied atmosphere surrounding artificial intelligence, not everything reported is necessarily true. It speaks to the power dynamics at play. Nvidia's chips are effectively the picks and shovels of this particular gold rush. To have them tied as a result closely to one specific artificial intelligence company would have given OpenAI a considerable advantage potentially stifling competition and concentrating even more power in the hands of a few very large players. It also would have left Nvidia somewhat exposed, hitched to one horse in a very crowded race. We've seen this pattern before, of course. Grand pronouncements, inflated valuations, and a general sense of breathless optimism. It's tempting to think that this non-deal might inject a dose of realism into the market a reminder that due diligence and a healthy dose of scepticism are still advisable. Yet I suspect that the lure of quick returns is too strong, and the next improbable rumour is likely already gaining momentum. As a result, we must continue to observe these developments with a clear head and a critical eye. And what are these developments, exactly? Well, there's been a peculiar turn of events in the ongoing discussion about artificial intelligence and employment. It appears that artificial intelligence systems are now being used not just to replace human workers, yet to hire them. Instead of simply automating tasks these artificial intelligence agents are being given the power to assess resumes conduct interviews and even make final hiring decisions. What this really means is that algorithms are now determining who gets a job, based on data analysis and pre-defined criteria. It's a shift from worrying about machines taking our jobs to machines deciding who gets them in the first place. The implications are considerable. Businesses are hoping for greater efficiency and a streamlined recruitment process. An artificial intelligence can certainly sift through countless applications far faster than any human. Yet there's the question of whether an artificial intelligence can truly gauge soft skills creativity or cultural fit – those intangible qualities that are often crucial to a successful hire. And, of course, there's the potential for bias. If these algorithms are trained on flawed or incomplete data they could easily perpetuate existing inequalities in hiring practices which is hardly a step forward. This development seems to reflect a broader trend of entrusting ever more decisions to automated systems often with the promise of objectivity and efficiency. Yet it also raises questions about transparency and accountability. If an artificial intelligence rejects a candidate, how do they know why? And who is responsible if the system makes a discriminatory decision? One might be forgiven for thinking that entrusting something as fundamental. as hiring decisions to a machine smacks of a certain managerial laziness. Perhaps it's simply easier to delegate these difficult decisions to an algorithm than to grapple with the messy realities of human potential. And as a result, we continue to find ourselves in uncharted territory. It is worth noting that researchers have been exploring ways to train artificial intelligence agents for safety-critical tasks using only pre-existing data rather than allowing the artificial intelligence to learn by trial and error in the real world. This involves feeding the artificial intelligence a dataset of past behaviours essentially showing it how to act in a safe and controlled manner. The system then learns from this data without needing to experiment and Another week another deluge of developments in the world of artificial intelligence. Finding a signal amidst the noise has never been more important. If you'd like a daily digest of key artificial intelligence news without the breathless pronouncements you can sign up for my newsletter at jonathan-harris.online. And for those seeking a more comprehensive understanding of artificial intelligence's potential impact my book "The Future of Government: Leveraging artificial intelligence to Enhance Services," is available at books.jonathan-harris.online/ai-government. It offers a considered look at the subject, free from hype and focused on practical application. That's it for this week's Turing's Torch. Keep the flame burning, stay curious, and I'll see you next week with more artificial intelligence insights that matter. I'm Jonathan Harris—keep building the future.