Good day, listeners. As a light rain shower drifts over London, we turn our attention to a subject that, much like the weather, often feels unpredictable and overwhelming: artificial intelligence. Today we reflect on Alan Turing's assertion that "It is possible to invent a single machine which can be used to compute any computable sequence." This encapsulates our mission here—to demystify the complexities of artificial intelligence and illuminate its potential. In a landscape cluttered with jargon and sensationalism, we aim to distill the essence of artificial intelligence, providing you with clarity and insight. Tired of drowning in artificial intelligence headlines? Ready for clarity, insight, and a direct line to the pulse of innovation? Welcome to Turing's Torch: artificial intelligence Weekly! I'm Jonathan Harris, your host, and I'm cutting through the noise to bring you the most critical artificial intelligence developments, explained, analysed, and delivered straight to you. Let's ignite your understanding of artificial intelligence, together. There's been a lot of talk recently about artificial intelligence creeping into just about every corner of our lives, and that includes some rather unexpected places. We're going to spend some time today looking at a few of those. Let's start with something that sounds almost deliberately dull: enterprise risk management. Now, enterprise risk management, or ERM as nobody calls it, is essentially the process of identifying, assessing, and controlling threats to an organisation's capital and earnings. Traditionally, this has involved periodic reviews and static reports. The suggestion now is that artificial intelligence can help companies move beyond simply reacting to problems after they occur and instead proactively anticipate and mitigate risks in real time. Think of it as a sophisticated early warning system for businesses. The claim is that artificial intelligence can improve this by continuously analysing
diverse data sources to identify patterns and predict potential problems before they escalate. The potential impact is significant, of course. Companies that can effectively manage risk have a clear competitive advantage. They're less likely to suffer financial losses, reputational damage, or regulatory penalties. Moreover, in an increasingly complex and interconnected world, the ability to understand and respond to risks in real time is becoming essential for survival. This also touches on the broader theme of governance and oversight, as artificial intelligence-driven risk management tools require careful monitoring to ensure they are accurate, fair, and aligned with the organisation's values. Financial institutions, in general, are becoming increasingly keen to let artificial intelligence systems make actual decisions, rather than simply assisting human beings. We're told that after an initial period of experimentation, the emphasis is shifting towards integrating artificial intelligence into core operations. We're moving beyond artificial intelligence simply generating content or improving specific workflows. The goal now is to have artificial intelligence agents that can independently analyse data, identify trends, and make real-time recommendations without constant human intervention. The promise is improvements in efficiency and the quality of decision-making, particularly in areas like risk management and customer service. DBS, the Singaporean banking group, has even begun experimenting with artificial intelligence to handle customer payments directly. They're working with Visa on a pilot program that would allow artificial intelligence agents to complete transactions on a customer's behalf. Up to now, artificial intelligence in banking has mostly been about providing advice, offering recommendations, perhaps flagging unusual activity. This is a step further.
Think of it as an automated personal shopper that not only suggests items you might like but also buys them for you without further input. In practice, this means granting an artificial intelligence agent access to your payment methods and authorising it to make purchases based on pre-set parameters or instructions. This is a trend we've seen in other sectors, a willingness to delegate increasingly complex tasks to artificial intelligence. The hope is that these systems will augment human capabilities, but the risk is that they will become black boxes whose reasoning is opaque. It's worth remembering that financial institutions are not exactly known for their humility, and the notion that an algorithm, no matter how sophisticated, can consistently outperform human judgment in complex financial matters seems, at best, optimistic. Perhaps they should focus less on emulating human decision-making and more on simply avoiding the kind of reckless behaviour that has historically plagued the sector. Of course, a degree of caution is warranted. While artificial intelligence can undoubtedly enhance risk management processes, it's not a magic bullet. The quality of the data fed into these systems is critical, and there's always the risk of bias or errors leading to flawed predictions. Furthermore, over-reliance on artificial intelligence could lead to a decline in human oversight and critical thinking, which are still essential for effective risk management. In short, artificial intelligence can be a powerful tool, but it's only as good as the data and the people who use it. Corporate treasury management, that rather unglamorous yet essential function of keeping a company solvent, is also apparently being infiltrated by artificial intelligence. We're told that companies are moving away from spreadsheets and towards automated systems powered by artificial intelligence.
What this really means is that software is being used to automate tasks such as cash flow forecasting, risk management, and compliance reporting. Instead of a human laboriously entering data and running calculations, an algorithm does it. The promise, of course, is greater efficiency, accuracy, and speed. This fits into a broader pattern we've seen across various sectors: the automation of routine tasks previously performed by white-collar workers. The argument is always that it frees up humans to focus on higher-value activities, but one wonders if the definition of "higher-value" always aligns with the interests of those whose jobs are being automated. One might also ask whether trusting algorithms with critical financial decisions is entirely wise, especially when the underlying models are often opaque and the potential for unintended consequences exists. After all, as any seasoned treasury manager knows, sometimes a spreadsheet and a healthy dose of human judgement are exactly what's needed to steer a company through uncertain waters. That said, the direction of travel seems clear, and those who resist the tide of automation may find themselves paddling upstream. And we are seeing a shift in how businesses are approaching artificial intelligence more generally. It's no longer enough to simply have artificial intelligence; companies are now under pressure to demonstrate a clear return on their investments. What this means is that the early days of experimentation and flashy demonstrations are fading. Executives are now demanding hard evidence that artificial intelligence projects are actually improving the bottom line. Companies are realising that getting the technology to work is only half the battle. The real challenge is translating that technology into measurable outcomes, such as increased revenue, improved efficiency, or happier customers.
This requires a more systematic approach, integrating key performance indicators that directly link artificial intelligence initiatives to business goals. The stakes are high. With boards increasingly demanding accountability, companies need to effectively communicate the value of their artificial intelligence investments in terms that resonate with shareholders and customers. Those who can't demonstrate a clear return on investment may find their artificial intelligence budgets shrinking. This also has implications for the wider economy. If artificial intelligence investments fail to deliver tangible results, it could dampen enthusiasm for the technology and slow its adoption across various industries. This focus on measurable value is a natural progression. In the early days of any new technology, there's often a period of hype and experimentation. But eventually, the rubber meets the road, and businesses need to justify their investments. What remains to be seen is whether this shift will lead to more responsible and effective use of artificial intelligence or whether it will simply stifle innovation and lead to a more cautious approach to the technology. Perhaps a bit of both, as so often happens. Moving away from the financial sector for a moment, let's consider something completely different: sports. The business of sport is now attempting to keep fans engaged all year round, not just during the season. This involves using artificial intelligence in messaging apps to maintain a continuous dialogue. Essentially, sports organisations are trying to move away from being event-driven and become always-on platforms. The idea is that by collecting and analysing data about fans, they can tailor communications and create a more personalised experience that resonates even when there are no games to watch. This means automated responses and dynamic interactions through messaging, keeping fans constantly connected.
The motivation is clear: teams want to cultivate a loyal fan base that supports them season after season. It is a shift from simply providing entertainment to building a kind of continuous relationship, mediated by technology. It also puts pressure on teams to adapt, not just technologically but also culturally, to embrace these new ways of interacting with their audience. The unified fan data becomes critical: organisations aim to tailor interactions, creating personalised experiences that extend beyond match days. One can see how this fits into a wider trend, though, can't one? The desire to quantify and monetise every aspect of our lives. Sport, like everything else, is being subjected to the relentless logic of data-driven engagement. The question is: will this constant digital interaction actually enhance the fan experience, or will it simply become another form of intrusive marketing? I suspect that, before long, many fans will be clamouring for a bit of peace and quiet. And perhaps the greatest irony here is that in the quest for ever-greater engagement, the human element, the very thing that makes sport so compelling, might be the first casualty. Speaking of sport, there's a new artificial intelligence tool that purports to predict the outcomes of cricket matches, specifically focusing on the upcoming T20 World Cup. Essentially, this is a prediction engine. It takes in live data, historical performance statistics, weather conditions, all the usual variables, and then spits out a projected winner for a given match. The system is built using readily available artificial intelligence tools and is designed to be user-friendly: enter the date and you get the predictions. The implications here are largely about engagement. Sports thrive on speculation and debate, and if an artificial intelligence can provide seemingly objective insights, it could alter how fans interact with the game. Think of it as an attempt to inject a dose of
data-driven analysis into what is often a very emotional and unpredictable arena. We've seen similar efforts in other sports, so it's a natural progression, particularly given the increased interest in sports analytics more generally. Of course, cricket, like any sport, is notoriously susceptible to the unexpected. A sudden downpour, a dropped catch, or a moment of individual brilliance can completely change the course of a match, rendering even the most sophisticated models useless. So the question becomes: can an artificial intelligence truly account for the inherent chaos of the game, or will it simply reinforce existing biases and assumptions? It may be a useful tool, but I suspect it will not replace the commentator's colourful language anytime soon. That artificial intelligence may well be more accurate than my own predictions, which are generally terrible. Let's move on to something completely different. Researchers in Alaska have developed an artificial intelligence system capable of identifying individual brown bears. This isn't just about counting bears; it's about recognising them, one from another, consistently over time. The system, dubbed PoseSwin, uses image analysis to learn from a large collection of bear photographs. It can then identify individual bears even if they've gained weight for winter, or lost it again, or are simply seen from different angles. Think of it as facial recognition, but for bears. The implications for wildlife management are considerable. Conservation efforts often rely on understanding population dynamics, migration patterns, and individual behaviours. Knowing precisely which bear is where, and when, provides valuable data for informed decision-making. This is especially relevant in the face of climate change and habitat loss, where tracking individual animals can reveal how they're adapting – or failing to adapt – to changing conditions.
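For the technically curious among you, the general recipe behind this sort of individual re-identification is usually embedding and matching: a model turns each photograph into a feature vector, and a new sighting is compared against a gallery of known individuals. Here is a minimal sketch of that matching step in Python. To be clear, this is not the PoseSwin system itself; the embeddings, bear names, and similarity threshold below are entirely made up for illustration.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(query: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Match a query embedding against a gallery of known individuals.

    Returns the best-matching bear's identifier, or None if nothing
    clears the similarity threshold (i.e. a previously unseen animal).
    """
    best_name, best_score = None, threshold
    for name, emb in gallery.items():
        score = cosine_similarity(query, emb)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy gallery: in a real pipeline these vectors would come from a model
# run over curated photographs of each known bear, not hand-typed numbers.
gallery = {
    "bear_032": np.array([0.9, 0.1, 0.2]),
    "bear_107": np.array([0.1, 0.95, 0.3]),
}

sighting = np.array([0.88, 0.12, 0.25])  # embedding of a new photo
print(identify(sighting, gallery))       # matches bear_032
```

The appeal of cosine similarity here is that it compares the direction of the feature vectors rather than their magnitude, which is one way a system can remain robust to the sort of seasonal weight changes mentioned above.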
This also highlights a growing trend: the application of artificial intelligence to environmental monitoring. From tracking deforestation to analysing ocean currents, these technologies offer the potential to gather and process data at scales previously unimaginable. It allows for a far more granular understanding of our impact on the planet. That said, a healthy dose of scepticism is warranted. While the ability to track individual bears is undoubtedly impressive, it's worth remembering that technology alone cannot solve complex environmental problems. Identifying a problem is one thing; addressing it requires policy changes, resource allocation, and, perhaps most importantly, a willingness to change our own behaviours. Knowing exactly which bear is struggling doesn't help if we continue to degrade its habitat. Agricultural robots are becoming more capable, too, thanks to a process called data annotation. Essentially, this is the painstaking work of labelling vast amounts of data so that the robots can understand what they're seeing in the field. Think of it as teaching a child the difference between a tomato plant and a weed. Someone, or something, needs to meticulously tag images and sensor data, identifying crops, pests, terrains, and all the other elements of a farm environment. Without this annotated data, the robots are essentially blind, unable to distinguish between what needs nurturing and what needs eliminating. The implications are significant, as farms are increasingly relying on robotic precision to optimise yields and reduce waste. If the data is poor, the robots will make mistakes, potentially damaging crops or misapplying treatments. This is especially relevant in the context of precision farming, where every plant counts and the margin for error is shrinking. In effect, the quality of the data directly impacts the efficiency and profitability of the farm.
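To make the idea of annotation a little more concrete: each labelled example is typically a record pairing an image region with a label, and quality control means checking those records before a model ever sees them. Here is a minimal Python sketch under assumed conventions; the field names, label set, and file names are all hypothetical, not taken from any particular annotation platform.

```python
from dataclasses import dataclass

# Hypothetical label taxonomy for a farm vision system; a real project
# would define its own set of crops, weeds, pests, and terrain types.
ALLOWED_LABELS = {"tomato_plant", "weed", "aphid", "bare_soil"}

@dataclass
class Annotation:
    image_path: str
    bbox: tuple          # (x_min, y_min, x_max, y_max) in pixels
    label: str

def validate(ann: Annotation) -> list:
    """Return a list of quality problems with a single annotation.

    Bad labels or degenerate boxes are exactly the kind of noise that
    would later cause a field robot to misidentify a crop as a weed.
    """
    problems = []
    if ann.label not in ALLOWED_LABELS:
        problems.append(f"unknown label: {ann.label}")
    x_min, y_min, x_max, y_max = ann.bbox
    if x_max <= x_min or y_max <= y_min:
        problems.append("degenerate bounding box")
    return problems

good = Annotation("field_0412.jpg", (10.0, 20.0, 110.0, 140.0), "tomato_plant")
bad = Annotation("field_0413.jpg", (50.0, 50.0, 40.0, 90.0), "tomatoe")
print(validate(good))  # []
print(validate(bad))   # flags the typo'd label and the inverted box
```

The point of the sketch is simply that "data annotation" is structured, checkable work: a misspelled label or an inverted bounding box is caught here, cheaply, rather than surfacing later as a robot spraying the wrong plant.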
We're also seeing the rise of data annotation as a significant business in itself, as it requires specialised expertise and infrastructure. It's a reminder that the development of artificial intelligence systems isn't just about algorithms and code but also about the often-overlooked human element of data preparation. One could argue that the romantic vision of the self-sufficient, autonomous robot farmer is somewhat undermined by the army of human annotators working behind the scenes to make it all possible. So while we marvel at the technological prowess of agricultural robots, it's worth remembering that their intelligence is, at least for now, heavily dependent on the quality of the data they are fed. Now, let's pivot to something a bit more personal. There's a new chatbot on the market called MeMe artificial intelligence, and it's pitching itself as more of a digital companion than a simple information provider. The idea is that it learns and adapts to your individual communication style, offering a more natural and flexible interaction compared to the often rigid experience of standard chatbots. The sales pitch emphasises text-based exchanges, avoiding formats like phone calls that some users find intrusive. It also claims to tailor its suggestions based on the evolving context of your conversations. Pricing follows a tiered model, allowing users to select a subscription level that aligns with their budget and engagement needs. Now, the real question is whether this promise of a personalised artificial intelligence companion is actually achievable. In practice, "learning your style" likely means analysing your language patterns, preferred topics, and response times. The goal is to anticipate your needs and tailor its responses accordingly. For a business, this might translate to improved customer service or more engaging marketing campaigns.
For individuals, it might offer a more satisfying and less frustrating experience when seeking information or assistance. Of course, the more data you feed into such a system, the more it knows about you, raising familiar questions about privacy and data security. Is the convenience of a personalised artificial intelligence companion worth the potential risk of your data being misused or exposed? And, perhaps more cynically, will this 'companion' still be quite so attentive once the initial subscription period ends? It all sounds rather too good to be true, doesn't it? Similarly, there's another new chatbot app called Dream Mate, which aims to offer a more engaging and personalised interaction. It's designed to feel less like a programmed response and more like a genuine conversation. Now, the promise of a chatbot that actually understands and adapts to your conversational style is, of course, the holy grail. Most of these things tend to funnel you down pre-determined paths, like some sort of digital choose-your-own-adventure. This one claims to respect personal boundaries, adapt to your preferred method of communication, and tailor its responses to your actual needs. It sounds rather less annoying than the average, at least in theory. The impact here, assuming it works as advertised, is that it could set a new standard for how we interact with artificial intelligence. If people genuinely prefer this more nuanced approach, it could force other developers to move beyond simple transactional interactions and focus on building more empathetic and user-friendly interfaces. It also raises the stakes in terms of data privacy and the potential for manipulation, naturally. The more "personal" a chatbot becomes, the more data it collects, and the more easily it can influence your decisions. And it's worth noting that this push towards "genuine connection" with digital entities is part of a broader trend.
We're seeing artificial intelligence marketed as a companion, a confidante, even a friend. This normalisation of emotional relationships with machines is something we should be paying attention to, particularly as these technologies become more sophisticated and harder to distinguish from human interaction. One does wonder, though, how long it will be before Dream Mate starts suggesting self-help books, or perhaps even trying to upsell you on a premium subscription for "enhanced emotional support." After all, a genuine connection is a fine thing, but a profit margin is forever. On a more extreme end of this spectrum, there has been a notable, if quiet, arrival of artificial intelligence systems designed to function as virtual girlfriends. It's not a product launch so much as a gradual seepage into the culture. Now, let's unpack that. We're not talking about simple chatbots that parrot pre-programmed responses. These are increasingly sophisticated language models, capable of generating surprisingly nuanced and continuous conversations. They remember details, feign empathy, and offer a semblance of companionship to the user. The appeal is obvious, particularly for those who experience social anxiety or isolation. Here is a non-judgmental listener, available at any hour, offering a risk-free environment for social interaction. It's a digital comfort blanket in an increasingly disconnected world. Yet this development is not without its unsettling implications. It raises fundamental questions about the nature of authenticity, emotional connection, and even what we consider to be "real." If someone finds solace in a simulated relationship, is that a legitimate form of support or a worrying substitute for genuine human interaction? And what happens to our understanding of intimacy when it can be so easily manufactured? This ties into the broader conversation around the role of technology in our lives and the potential for it to both connect and isolate us.
We seem to be increasingly willing to outsource aspects of our emotional lives to machines. One wonders if we will eventually find that the algorithm was never really listening in the first place but simply aggregating and exploiting our data. Perhaps a more pressing question than whether artificial intelligence can love us is whether we are capable of loving it back, or if we are simply projecting our own desires onto a blank screen. A lot to think about, indeed. Let's circle back to the more mundane end of the spectrum, though. There's a new mobile app called Picora that uses artificial intelligence to turn your photos into short, shareable videos. It promises desktop-level editing on your phone, letting you cut clips, add music and text, apply filters, and generally tart up your content for social media. Now, the idea of artificial intelligence-enhanced video creation isn't exactly new. What Picora does is allow you to upload a picture, select an artificial intelligence video style, and then it spits out a supposedly engaging animated video. The app is subscription-based, which is where things get interesting. The real question is whether this is actually worth paying for. There are already countless free or low-cost video editing options available. Is Picora truly offering something unique, or is it just repackaging existing features under the artificial intelligence banner? The risk is that users are paying for hype rather than genuinely useful tools. This fits into a broader trend of artificial intelligence being used to automate and simplify creative tasks. We see it in writing, image generation, and now video. The promise is always greater efficiency and accessibility, yet the reality often falls short. There's a trade-off between convenience and control, and between speed and originality. Ultimately, it feels like yet another case of technological solutionism: a solution in search of a problem.
I'm not convinced that the world is crying out for more artificial intelligence-generated videos. Perhaps some are, but I suspect most users are just as happy with the existing range of filters and editing tools already available. And it's worth remembering that looking good on video is less about the tools you use and more about, well, you. Speaking of image generation, there's a new image generator on the scene making waves by claiming to avoid the content restrictions that plague its competitors. This means that, unlike many popular artificial intelligence image platforms, this one apparently allows users to generate a wider range of content without running into automated blocks or censorship. Think of it as image generation with the guardrails removed. Well, another week, another torrent of announcements, breakthroughs, and pronouncements. Sifting the signal from the noise is more important than ever, isn't it? If you'd like a daily distillation of what truly matters in the world of artificial intelligence, you can sign up for my daily briefing at jonathan dash harris dot online. One email, all substance, no fluff. And for those seeking a deeper understanding of the field's foundations, my book "The Architects of artificial intelligence: Pioneers and Breakthroughs" is available at books dot jonathan dash harris dot online slash ai dash architects. A look at the people and ideas that built this field. Understanding, not buzzwords. That's it for this week's Turing's Torch. Keep the flame burning, stay curious, and I'll see you next week with more artificial intelligence insights that matter. I'm Jonathan Harris—keep building the future.