Musk introduced Grok, a generative AI (GenAI) tool, to his social media platform Twitter/X. Software such as Grok allows users to generate text, speech and images simply by asking nicely.
Experts are expecting the widening use of generative AI to be a key feature of 2025.
Calum McDonald from the Scottish AI Alliance said: “They generate outputs based on patterns in the text, speech or image data they have been trained on.
“It’s kind of like how a chef, after tasting lots of recipes, could come up with their own ideas for dishes by combining the flavours and techniques they’ve learned.”
Programmes like these have existed for some time, but they have only recently broken into widespread use.
This is partly because these programmes have become free to use across a breadth of channels, but the decisive factor in their rise is improved quality.
The 2010s saw major advances in GenAI capability, as new methods emerged and large datasets became easier to access for training.
The ceiling remains uncharted, however, as McDonald added: “Over the course of the next year, GenAI is likely to be further integrated into how we interact with the digital world, and perhaps beyond.
“The tools themselves seem set to continue to improve in their ability and accuracy; however, that isn’t inevitable.
“Importantly we are seeing a trend towards further integration of GenAI technologies into how we interact with the digital world, regardless of their ability.
“This will mean that GenAI will have a wider and deeper impact on more aspects of our lives – whether this impact is positive or negative will depend on the people and organisations using these tools, and whether they use them in ways that are trustworthy, ethical and inclusive.”
Since humanity learned to pick up sticks, debate has surrounded the danger of a tool versus its utility.
Today’s debates around the use of GenAI are no different.
As it evolves and becomes more readily available, questions arise over who should be able to wield such a complex stick.
Dr Zeerak Talat is a Chancellor’s Fellow in responsible machine learning and artificial intelligence at the University of Edinburgh.
“I think that there should be regulations,” he said.
“Particularly children become quite vulnerable here – if we can create photorealistic images of real children, then the space for harm becomes incredibly large.
“I think it’s vital that we think more carefully about how we regulate and how these tools are allowed to be used in our society and what kind of space we make for them.”
Despite the seemingly limitless possibilities of these programmes, their practical value seems constrained.
While other AI systems offer clear benefits to areas like education and scientific research, the generative kind struggles to find solid uses beyond tasks a human hand can do better.
The key appeal of these tools is their simplicity – now that a system has seen the Sistine Chapel, it can produce a cheaper, quicker version, though one with no integrity as a work of art.
The challenge for the public, then, is to keep their eyes peeled for what seems real but isn’t.
Dr Talat added: “If it seems too good to be true or even really, really awful, then it’s worth double-checking.
“For example, I saw a generated photo of a street somewhere unnamed in the UK with a lot of parked supercars someone had posted implying that the government were giving migrants and refugees supercars. It seems improbable on the face of it, right?
“So you have to look at them that way, thinking, is this a probable thing?
“One of the giveaways of the generated image was that none of the cars had legible license plates, they were just pixelated or some dark splatters on a white background.
“So looking at the finer details, like does this finger look a little odd? Should it bend that way?”
Key dates in the diary this year include the Children’s AI Summit on February 4, hosted by the Children & AI team in The Alan Turing Institute’s Public Policy Programme and Queen Mary University of London.
It will “bring together children from across the UK to share their messages for global leaders, policymakers, and AI developers on what the future of AI should look like”.
The Scottish AI Alliance also stresses that it is pushing for the country to become a leader in “trustworthy, ethical and inclusive AI” – as attention on the direction of AI integration continues to grow.