By now you've probably spotted headlines hyping up ChatGPT, a new artificial intelligence program that appears to think and make decisions for us. It has academics and cybersecurity experts worried that it could swamp social media, and in turn our minds, with fake accounts and misinformation. Yet this is nothing new.
A dig into the historical background reveals that such manipulative practices have antecedents stretching back to a unique group of Bletchley Park codebreakers. They were well versed in taking nothing as read, and their crucial work involved one particularly outstanding individual who was subsequently instrumental in establishing Scotland's early contribution to AI and robotics.
Wind back exactly 60 years and the origins of the University of Edinburgh's fledgling department of AI can be traced to a small research group at 4 Hope Park Square, headed by one Donald Michie, then reader in surgical science. During World War II, he was a member of Max Newman's group at Bletchley and became a personal friend of his colleague Alan Turing. Michie had been introduced to computing and believed in the possibility of building machines that could think and learn.
By the early 1960s, the time appeared to be ripe to embark on such a scientific adventure. An experimental programming unit was established at Edinburgh and he became director of the new department of machine intelligence and perception, persuading senior associates at Cambridge University to coordinate and set up a 'brain research institute'.
Michie had worked for the Government Code and Cypher School at Bletchley, contributing to the effort to solve 'Tunny', a German teleprinter cipher whose messages, cryptanalysts established, were sent in binary code, packets of zeroes and ones resembling the code used in present-day computers. Tunny was the crypto-jackpot, as its messages included those sent not only by the Nazi army high command in Berlin but by Hitler himself. The code was broken in 1942, and it was soon realised that Tunny rivalled or even exceeded Enigma in importance.
Displaying a natural aptitude for cryptography, Michie worked right in the thick of it with Turing, Newman and Good. After the war, he studied at Balliol College, Oxford, and once in Edinburgh he remained for the next two decades, eventually leaving to, fittingly, found The Turing Institute in Glasgow. He was also deeply involved with the UK charity The Human Computer Learning Foundation, continuing to collaborate with others on natural language systems and theories of intelligence. A fellow of the Royal Society of Edinburgh, he was also a foreign honorary member of the American Academy of Arts and Sciences and a fellow of the British Computer Society.
Edinburgh research in robotics produced FREDDY II, capable of assembling objects automatically from a heap of parts. However, as often happens, real-life intellectual disagreements over the nature and aims of AI stalled further development at the time. What became a lengthy disharmony eventually saw a machine intelligence research unit established to accommodate Michie's work. Over the next decade, projects were dominated by automated reasoning, cognitive modelling, children's learning and computational theory.
A first-of-its-kind joint degree combining linguistics with AI was launched, developing into a PhD programme in cognitive science. From such modest beginnings, the centre for cognitive science emerged in 1985, in tandem with collaborative projects covering automated assembly, unmanned vehicles and machine vision. These vital areas, together with 3D geometric object representation, remain very much to the fore today.
The start of a further decade of AI activity coincided with the publication of Realising our Potential, the government's new strategy for harnessing the strengths of science and engineering for wealth creation. It had been clear from a relatively early stage in the development of AI at Edinburgh that there was strong interest in putting the technology to work outside the laboratory.
Commercial interest in AI exploded into life in the early 1980s, and such was the university department's reputation by then that, despite its modest size, it was bombarded with requests from UK companies for technical assistance of various kinds. A separate non-profit organisation was set up to support vital research and development (R&D) applications.
Later, during Michie's time at The Turing Institute, the AI laboratory in Glasgow undertook basic and applied research, working directly with large companies across Europe, the United States and Japan, developing software and providing training, consultancy and information services. Companies including IBM, Burroughs, British Airways, Shell and Unilever seconded researchers to develop new industrial AI applications.
Among the key projects, one involved working under contract for Radian Corp to develop Space Shuttle auto-lander code with 'Rulemaster', using training examples from a NASA simulator. Hard though it is to believe, The Turing Institute faced financial difficulties and closed in 1994, despite a global reputation for excellence. Such was the derisory funding then given to vital research work.
Thomson Reuters Labs lead designer Milda Norkute says it is clear AI practitioners should focus more on explaining their solutions from the very start of the development process. The user must be placed front and centre to prevent potential misuse of data, and of AI itself. As AI is increasingly integrated into sectors where it makes decisions that affect people's lives, users need to understand how a system 'arrives at its conclusions and recommendations'. The same approach must apply to chatbots.
My journalistic colleague David Silverberg at the BBC says that whether it's cookery advice or help with a speech, ChatGPT represents the first opportunity for many to play with an AI system. Its developers, OpenAI, report that the system has been 'trained' using text databases drawn from the internet, books, magazines and Wikipedia: 300 billion words in all. He warns the chatbot can seem 'eerily human' and can be a powerful tool for those up to no good.
A multi-trillion market is up for grabs in what has become known as technology's 'holy grail': AI and machine learning. Chatbots, computer programs that converse via text or text-to-speech, are used as an alternative to human-to-human contact. They are increasingly employed by social media outlets and consumers, along with business and commerce. Techies describe the development of generative AI software tools as 'transformative' on a scale similar to the emergence of the internet.
Microsoft has integrated OpenAI's latest model, ChatGPT, into its struggling Bing search engine and Edge web browser. It hopes to see massive revenue from sectors including entertainment, health, education, finance, e-commerce, news and politics. You name it. Many chatbots already employed by businesses run on your mobile's messaging apps or via SMS, commonly used for business-to-consumer (B2C) customer service, sales and marketing. You know the sort of thing: a conversational online/mobile (ro)bot tells you 'Your call is special to us', then you're left for 15 minutes listening to the Supremes singing You Keep Me Hangin' On.
When it comes to ChatGPT, academics, cybersecurity researchers and AI experts have singled out this generation of chatbots from previous incarnations, collectively warning that they could be used by bad actors on social media 'to sow dissent and spread propaganda', terms that sound eerily familiar from a past age. The difference is that, this time around, we cannot expect a group of Bletchley codebreakers to come to our rescue.
Former Reuters, Sunday Times, The Scotsman and Glasgow Herald business and finance correspondent Bill Magee is a columnist writing tech-based articles for Daily Business, the Institute of Directors, Edinburgh Chamber and, occasionally, The Times' 'Thunderer'.