In this, the fourth in our 'controversial question' series, we attempted to pour some cold water on AI in the IT industry. Predictably, and quite reasonably, the industry fought back.
This is the question we asked: "Someone once said to us, 'Machine Learning is written in Python, Artificial Intelligence is written in PowerPoint.' Do you agree, disagree, have an alternate view?"
The protagonists and the skeptics reveal themselves quite clearly in the responses.
Introduction
Pat Devlin, Director, South Pacific (ANZ) at Aruba, an HPE company, doesn't like the question. "Do we agree [with the premise of the question]? Absolutely not! We do agree that machine learning can already be seen in action across the globe, and it's been used to build AI systems that are live and active already.
"Maybe then it's a definition problem? Machine learning is the ability for systems to automatically learn and improve from experience without being explicitly programmed. So, what is AI? AI are the algorithms that machine learning allows us to build. It's the ability to simulate human thinking."
Amir Liberman, CEO of Nemesysco, counters that, "Artificial intelligence is a broad concept that covers many abstract ideas and approaches. Like PowerPoint, the AI concept is a good framework for presenting visionary ideas, while machine learning is a practical approach for mechanical processes that can be translated into lines of code, although limited to capturing logic based only on current inputs."
And, ever the cynic, Garrett O'Hara, Principal Technical Consultant at Mimecast ANZ, offers: "With AI it seems that three things are true: the capability of AI is being overstated, the usefulness of AI is being understated, and when AI is being talked about, it's often confused with the subset of Machine Learning."
What is AI?
Garrett O'Hara suggests that "Often when people talk about AI, they are talking about the subset of ML. And at other times, when people talk about ML or AI, they are just talking about regular programming that seems 'intelligent'. As an example, the fact that this word processor has been nudging me on grammar and spelling is not because it is intelligent. It is because the word processor is accessing a dictionary and knows the grammatical rules - a blunt and unintelligent instrument, but an incredibly useful tool. Useful isn't always brand new and sexy."
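O'Hara's word-processor example is easy to make concrete. Below is a minimal sketch in Python of such a 'blunt instrument': a plain dictionary lookup that flags unknown words, with no learning anywhere. The word list is, of course, illustrative.

```python
# A deliberately "unintelligent" spell checker: pure dictionary lookup.
# No statistics, no learning -- just a rule applied to every word.
KNOWN_WORDS = {"the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"}

def flag_unknown_words(text: str) -> list[str]:
    """Return the words not found in the dictionary."""
    words = (w.strip(".,!?;:").lower() for w in text.split())
    return [w for w in words if w and w not in KNOWN_WORDS]

print(flag_unknown_words("The quik brown fox jumps over the lazy dog."))
# ['quik'] -- flagged only because it is absent from the word list
```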
Jason Duerden, Managing Director, BlackBerry Spark ANZ adds, "I agree with this sentiment [of the question posed] - Machine Learning (ML) is the class of technology where rapid innovation and advancements are being made. Meanwhile, Artificial Intelligence (AI) in its true ideological form - the kind that we see in Sci-Fi movies and TV, such as Skynet from the Terminator franchise or the androids in Raised by Wolves - does not exist. The ML technology that we have today is a subset of that ideology, yet AI is the term that gets used in flashy marketing materials. The popularisation of the term means we see it frequently on brochures and slide decks, but the promise rarely meets reality."
Ray Greenwood, Artificial Intelligence and Machine Learning Lead, SAS Australia and New Zealand counters, "I like the spirit of the quote, yet I can't say I agree with it - a controversial statement is a good way to stir debate, however! The hype around Artificial Intelligence (AI) for several years now has indeed led to much disillusionment. Adding to this, I expect more 'inflated expectations' around AI are to come as organisations continue to find it easy to construct the underlying models that power AI applications but far harder to operationalise them."
Thanks to Duerden, we have a useful definition: "DARPA, the US Defense Advanced Research Projects Agency, classes three waves of AI: Handcrafted Knowledge; Statistical Learning; and Contextual Adaptation.
- "Wave 1 is 'Handcrafted Knowledge' i.e. the origins of ML in basic heuristics that we see in signature-based antivirus, offering narrow areas of problem solving and no learning capability outside of curation. When the cyber security industry in 2014 realised that true Deep Learning and Machine Learning models could significantly change the game, vendors started classing any piece of machine-driven decision-making as ML. This creates chaos in a buyer's market with the majority of vendors marketing basic heuristics as AI.
- "Wave 2 is 'Statistical Learning' i.e. machine learning prediction, which is where BlackBerry Cylance started. Wave 2 significantly shifted the needle in the security industry, for the first time allowing defenders to predict an attack without ever having seen it before and reducing the amount of effort on detection and response - in many cases years in advance of the threat. In Wave 2, the ML engine offers limited explanation of strategy, limited adaptation to change and requires some human training labelling.
- "Wave 3 is ''Contextual Adaptation'' i.e. machine learning prediction, explanation and adjustment to concept flow, enabled by the use of deep learning neural networks for feedback loops and strategy. Wave 3 is emerging as the new frontier for cyber defenders, taking all of the prediction benefits of Wave 2 and now enabling explanation for faster human intelligence, prediction of non-supervised learning structures, such as user identity and authentication flows, and can automatically and autonomously re-train or update intelligence models."
But does AI really exist outside of the next capital-raising slide deck?
O'Hara explains that "The reason AI seems to be written in PowerPoint is that it gets confused and conflated with its children: Machine Learning, deep learning and Skynet. Artificial Intelligence as a term sounds interesting. Machine Learning (ML) sounds like its workhorse. Like so many things in our industry, the reality of what AI/ML can deliver is very real but unfortunately sometimes overstated. Too often that overstatement happens on a PowerPoint slide."
Aruba's Pat Devlin seems to agree - "For years, there's been a debate in the IT Engineering community that machine learning is already widely used in industries across the globe, while AI arguably remains a theoretical concept. Are there really any intelligent machines?"
Transformation
Bede Hackney, ANZ Country Manager at Databricks, suggests that "Data + AI has the potential to transform every industry. The issue for many organisations is not having the correct architecture in place for AI to be fully accessible; as such, it remains an idea rather than a fully supported and functioning aspect of the business.
"Whereas previously organisations have managed data for BI and reporting in 'data warehouses' and applied machine learning algorithms with data stored in 'data lakes', customer needs have evolved, and the need to have a unified data platform that connects these two types of data management paradigms has become more prevalent.
"Starting the other way round (with data warehouses) will continue to lead many companies to feel that AI can only live in PowerPoint presentations."
Greenwood continues, "The lack of clarity as to what constitutes AI has contributed to scepticism regarding the reality of its delivery. However, AI is indeed part of many facets of our lives - its seamlessness (and a lack of roaming robots) means many are unaware of its presence. For instance, most call centres now engage AI models to power chatbots that streamline the call-in process, managing requests without the need for human intervention, or handing the customer to a staff member once relevant contextual information indicates that a human's input is required to achieve an optimal outcome.
"As more business processes are enhanced using any combination of AI's underlying componentry including Machine Learning, Computer Vision, Natural Language Processing, Forecasting and Optimisation, the exact definition of AI will matter less than having an organisation wide understanding of how these capabilities can modernise and transform business processes. Where historically the focus has been on creating algorithms and techniques, the future will depend on successful cultural change and analytic literacy programmes to ensure any AI initiative is implemented and adopted in a sustainable fashion."
Is it 'artificial reality'?
We had to wonder just how 'real' AI is. Liberman suggests, "To offer real intelligence from voice analytics, our approach captures the reality as perceived by humans - these are the genuine emotions of a person. True emotional reactions and personality style are what drive our decision making as humans."
In real situations
Garrett O'Hara, while contemplating advances in IT security, suggests that "Artificial Intelligence that has utility in cybersecurity would be of the 'narrow AI' type (or weak AI). That means the intelligence would be applicable to a particular problem type, threat or security area - and that is all. The AI used to detect a human face is tuned to do only that, so it won't be particularly good at understanding traffic flow patterns (despite potentially similar algorithmic approaches). Weak AI is not going to become self-aware, have an identity crisis and decide that to be fulfilled it should be a cyber-carpenter in Byron Bay. The Turing Test doesn't even rate a mention."
O'Hara continues, "What about cybersecurity? We're not at a point where we can have AI that oversees the security of an organisation with an iron fist - as seen in The Matrix. But if you build an Artificial Intelligence using the subset of Machine Learning to understand what all the users in an organisation do 'normally', it can then 'notice' what is 'not normal' and raise its virtual eyebrow - or a Security Operations Centre (SOC) alert. That notable difference can be as bread-and-butter as reviewing email sending patterns: why did Alice say 'hey Bob' instead of 'hi Bob', or why is Bob sending an email at 5:31am on a Saturday from an IP address 15,000 kilometres from where he was last located 12 hours ago?"
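As a rough illustration of that 'virtual eyebrow', the sketch below baselines a user's historical sending hours and flags sends that fall outside them. It is a toy; real user-behaviour analytics model far more signals (geography, devices, writing style) than a single hour-of-day range.

```python
# Toy baseline of "normal" sending hours per user, built from history.
history = {
    "bob": [9, 10, 11, 14, 15, 16, 9, 10, 17],  # illustrative send hours
}

def is_anomalous(user: str, hour: int, margin: int = 1) -> bool:
    """Flag a send whose hour falls outside the user's observed range."""
    seen = history.get(user, [])
    if not seen:
        return False  # no baseline yet, so nothing to compare against
    return hour < min(seen) - margin or hour > max(seen) + margin

print(is_anomalous("bob", 5))   # True: a 5:31am send raises the eyebrow
print(is_anomalous("bob", 10))  # False: well within Bob's normal pattern
```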
"What about the billions of URLs that get sent every day? Buried in there are URLs that will drop ransomware or facilitate credential theft. Machine Learning can be used to parse enormous datasets and with supervised learning, can get very good at recognising the patterns that match up with dangerous URLs. Malicious URLs are ephemeral so a list of known good/known bad only gets you part of the way there in terms of protection. Machine Learning gets you further. Then you hope the next hype-wave of HI, or Human Intelligence, gets you the rest of the way. I.e. Alice notices Bob's email just said "hi" not "hey" and he never works Saturday mornings."
Duerden continues, "In the cyber security industry, AI and ML are often marketed as the best thing since sliced bread. As Josh Zelonis, Principal Analyst at Forrester, so eloquently put it: 'When the endpoint detection and response market was getting started, there was a lot of pushback, ranging from privacy concerns to what the acceptance of a second security agent on endpoints would be (apparently, it was never going to happen). Then something incredible happened… [intelligent anti-malware, based on behavioural analysis] burst onto the scene, disparaging signature-based malware detection and ushering in the age of AI/machine-learning marketing.'
"The use of ML in cyber security changed the industry forever: on the one hand, it truly revolutionised endpoint security solutions; on the other, AI became a buzzword that vendors tacked onto existing products. No longer was innovation related to research, development and patents; suddenly, innovation was about fancy infographics and the mere presence of the term AI."
But what about the training sets? Pat Devlin wonders: "What's often overlooked in this discussion about AI is the amount of human intelligence that goes into training the machines. The more diverse the data we have access to, the more learning goes into training AI engines to simulate human thinking.
"AI is out there, and it's built on human thinking and human thinking is imperfect, it's biased and prone to prejudice. One of the flaws of AI is that if we program data based on human input, we don't get perfect decision making. We get all the prejudice and bias that we've taught the machine along the journey and we may even make it harder to analyse and understand!
"This is true AI. It may not be what the science-fiction novels and films have promised, it might not yet beat a Turing Test but it's still very much more than just PowerPoint. It's real, it's alive and there are plenty of examples across industries of how this has been implemented with technological solutions. We think of AI as a system with the ability to simulate human thinking. If this is true, we're well past AI as a concept (or a PowerPoint) and we are already delivering!"
In summary, Greenwood quite reasonably observes that, "Whether the code for modern enterprise analytics platforms is written in Python, SAS, R, Lua, Julia, TensorFlow or something else, if the business case wasn't written in the boardroom in plain English, the project probably won't last long when the time comes to migrate it from the lab to the real world."
Thoughts and Conclusions
Greenwood asserts that, "We are at a unique juncture in history where we have the technical capability to implement paradigm-changing technology like AI. However, if we fail to properly manage the big picture around AI - its usage being fair, accountable, transparent and ethical, and a few more considerations besides - then we may simply enter the next 'AI winter'. In that case, the real promise of broad AI uptake may remain relegated to PowerPoint for some, and we will have failed to take full advantage of what AI can deliver: machines augmenting how humans work and play to improve our lives."
Pat Devlin adds, "While AI seems very sensational and headline-grabbing, the potential it presents grows exponentially over time, as it takes massively complex problems and creates simple solutions.
"It's not conceptual, it's live and it's solving real world networking problems!"
To counter that, Duerden warns, "The overuse of Artificial Intelligence as a buzzword slapped onto brochures and slide decks has distorted its meaning and created a cloud of confusion surrounding AI and ML. Given the uncertainty in the market, it's important to ask vendors touting AI how it actually works. More often than not, they won't be able to tell you!"
However, we should leave the final word to Garrett O'Hara. "The functionality may have been written in Python, but the utility in those situations is meaningful and warrants a place in a PowerPoint presentation. We just need to be more measured, and clear, in how we have the cyber security conversation about ML/AI.
"So yes, Machine Learning is written in Python, Artificial Intelligence in PowerPoint, and Human Intelligence in Sharpie on the back of a toilet door."
All we can say is, may all toilet doors be this blessed.