It may sound strange that a tech company that mainly develops data-driven A.I. solutions would title an article "Why A.I. is doomed to fail". That is because we know better than most why A.I. applications have failed to succeed in the industrial and manufacturing sector. A.I. is not doomed to fail. On the contrary, it is doomed to take companies to success when implemented correctly.
So, what are the leading causes that jeopardise the adoption of A.I. inside industrial companies? In this short post, we cover the top three.
1) Starting Big
Implementing A.I. in any company is a somewhat disruptive process. People from different departments and functions are involved in the project (e.g. implementing a predictive maintenance solution), and the amount of data necessary to feed the algorithms grows almost exponentially with the number of machines involved. As complexity increases, the time to adoption stretches, people quickly lose momentum, and the project risks being trashed with the excuse that either the company is not ready or the technology is not advanced enough.
To avoid hitting a brick wall during the adoption phase, we have seen that smaller projects, led by a task force of highly motivated people and run in the most favourable setting the company can offer, return the best chances of success. This follows the Kaizen approach, according to which small changes can trigger tremendous results if the transformation is conducted progressively rather than as a quantum leap.
Small projects have a faster adoption time and lead to quicker productivity rewards that can convince the rest of the company to adopt the new solution.
2) Using only some data
Would you ride a rugged trail for the first time on a brand-new bike while wearing an eye patch? I would say it is not the easiest way to succeed, and I would not blame myself if I failed to spot an anomaly or something going on in my blind spot.
The same goes for A.I.-powered solutions in the industrial and manufacturing sector: adoption often fails because the algorithms have blind spots on what is going on in the factory. Often, not all the data are made available, and this nullifies the potential of the algorithms. Machine Learning needs many examples to understand what is happening, for instance, to spot a machinery anomaly in advance.
Withholding the data necessary to learn makes the process lengthier at best, impossible at worst. That is why it is so essential to move to A.I. implementation once all the data can be made available to the software.
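As a toy sketch of this point (the sensor readings and threshold below are invented for illustration, not real plant data), a fault that shows up in one sensor channel simply cannot be detected if that channel is withheld from the algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant data: 200 readings from two sensors on one machine.
vibration = rng.normal(1.0, 0.1, 200)      # behaves normally throughout
temperature = rng.normal(60.0, 1.0, 200)   # normal, except for ...
temperature[150] = 75.0                    # ... one anomalous spike

def zscore_anomalies(readings, threshold=4.0):
    """Flag readings more than `threshold` standard deviations from the mean."""
    z = np.abs(readings - readings.mean()) / readings.std()
    return np.flatnonzero(z > threshold)

# With the temperature channel available, the fault at index 150 is found.
print(zscore_anomalies(temperature))

# With a data blind spot (temperature withheld), the same fault is invisible:
# no vibration reading carries any trace of it.
print(zscore_anomalies(vibration))
```

Real anomaly-detection models are far more sophisticated than a z-score, but the blind-spot effect is the same: no algorithm can learn from data it never receives.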
3) Not considering human activity
Last but not least, one of the top three reasons why companies fail to adopt A.I. solutions in the industrial production environment is that algorithms are fed only with data coming from sensors, while human activity is not considered at all.
On the contrary, sensor data are of little help if not coupled with information coming from production and maintenance activity. Not collecting information about how people in the plant interact with the machines, and not keeping track of their actions, often makes anomaly detection very tricky for A.I.: this data blind spot causes algorithms to mistake anomalies for maintenance activities and vice versa.
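A minimal sketch of why this matters (timestamps and helper names below are hypothetical, not Scops code): if the algorithm also receives a log of maintenance windows, events flagged during maintenance can be separated from genuine anomalies:

```python
from datetime import datetime

# Events flagged as anomalous by an algorithm watching sensor data.
flagged = [datetime(2020, 3, 2, 14, 0), datetime(2020, 3, 9, 10, 30)]

# Human activity log: known maintenance windows on the same machine.
maintenance = [(datetime(2020, 3, 9, 10, 0), datetime(2020, 3, 9, 12, 0))]

def true_anomalies(flags, windows):
    """Keep only flagged events that fall outside every maintenance window."""
    return [t for t in flags
            if not any(start <= t <= end for start, end in windows)]

# Only the 2 March event survives; the 9 March one was just maintenance.
print(true_anomalies(flagged, maintenance))
```

Without the maintenance log, both events look identical to the algorithm, which is exactly the confusion described above.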
That is why we have recently launched Scops Q-Track, which makes it very quick to keep track of all the human interactions with machines, creating a 360-degree view of what is happening in the plant. This tremendously empowers industrial A.I. because, for the first time, it makes available not only IoT data but also a wealth of people's interactions with machines, such as maintenance activities, usage and human-based anomaly signalling.
A.I. is doomed to make companies more successful, but only when appropriately implemented. In this short article, we have summarised the top three mistakes that make A.I. adoption doomed to fail.
Starting with big projects instead of small pilots increases the risk that the A.I. implementation runs aground because the level of complexity and disruption is too large to be managed.
Also, withholding necessary data creates blind spots for the algorithms and prevents A.I. from spotting anomalies and understanding trends. This makes A.I.-driven results disappointing and slows down the adoption process, even if A.I. is not directly to blame.
Last but not least, A.I. needs humans in the loop to learn and perform at its best. Sensor data are not enough to understand the complexity of a production plant, and tracking human activity is crucial to give A.I. all the information it needs. Keeping records of human-machine interactions is vital, but it is also too time-consuming unless the right digital solution makes the process efficient and smooth.
Scops Q-Track aims precisely at keeping track of all the human-machine activities carried out in the plant. It does not require people to record data; instead, it offers a platform so handy and useful for employees and engineers to share information that they create data logs without even noticing.
The Four Vs of Big-Data
When we refer to Big-Data, we do not just mean databases with a significant Volume of data, but something more complex.
Big-Data is not only about Volume: it is also about Velocity, Variety and Veracity. Together they make up the Four Vs of Big-Data.
Big-Data is generated at high velocity. Think about the speed at which we produce data just by browsing Facebook: thousands of records in minutes.
Another critical characteristic is the Variety of data. Big-Data comes in various forms: text, images and sound. Large Volume and Variety are what make the quality and readability of Big-Data quite uncertain. Hence the need to check its Veracity and distil it, just as we refine oil to turn it into gasoline.
Did you know?
The amount of data we produce every day is increasing exponentially. Nowadays, almost everything we do leaves a digital trace.
And we are not just talking about structured data organized into tables. Today, unstructured data such as videos, pictures, text etc. make up 80% of the world's data.
This gives companies access to unprecedented amounts of data they can use to extract useful insights. From logistics to marketing and customer service, no business unit will be left untouched by data.
If Artificial Intelligence is a car ...
Machine Learning, Artificial Intelligence, Big Data… We hear these words non-stop nowadays, but confusion around these concepts, and the connections between them, remains. We want to shed some light on them and, to do this, we are going to use … a car. I know what you are thinking: “Wait! Weren’t we talking about Artificial Intelligence?!” Don’t worry, you’ll understand in a second. For now, just trust us. Think of Artificial Intelligence as a car, fasten your seatbelts and let’s get started!
… Big Data is the oil
It was 2017 when The Economist published a popular article stating that data is the new oil. The article referred to the value of data in today’s digital economy, and facts have proven its observations right. However, when we talk about Artificial Intelligence, the same metaphor holds. Data is the new oil in the sense that modern Artificial Intelligence applications need data to work, just as a car needs fuel to move. Careful though, oil alone is not enough!
… Smart Data is the fuel
If you put oil in your car’s gas tank, the consequences won’t be pleasant. That’s why a gas station sells us gas, not oil. And gas is, simply put, a “purified” version of oil. There is more: the higher the quality of the gas, the better your car’s performance will be. The same story holds for data. If you want useful results from your Artificial Intelligence applications, Big Data is not enough. Smart Data is what you need, and Smart Data is what you get when you clean, filter and transform Big Data. (See “Big Data: Using SMART Big Data, Analytics and Metrics To Make Better Decisions and Improve Performance” by Bernard Marr for more on Smart Big Data.)
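As a minimal sketch of that “refinery” step (the record layout and thresholds below are invented for illustration), cleaning and filtering raw records is what turns Big Data into Smart Data:

```python
# Raw (Big) sensor records straight from the field: some are unusable.
raw_records = [
    {"machine": "press-1", "temp_c": 61.2},
    {"machine": "press-1", "temp_c": None},     # missing reading
    {"machine": "press-1", "temp_c": -999.0},   # sensor error code
    {"machine": "press-2", "temp_c": 58.7},
]

def refine(records, low=-40.0, high=200.0):
    """Keep only records with a present, physically plausible temperature."""
    return [r for r in records
            if r["temp_c"] is not None and low <= r["temp_c"] <= high]

smart_records = refine(raw_records)
print(len(smart_records))  # 2 usable records out of 4
```

Real pipelines add deduplication, unit conversion and cross-checks against other sources, but the principle is the same: only the refined records should ever reach the engine.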
… and Machine Learning is the engine
Any car needs its engine, and here is where Machine Learning comes into play. Nowadays, Machine Learning algorithms are the beating heart of Artificial Intelligence, just as the engine is for a car. And, just like an engine, they need fuel (data) to work. Machine Learning is a family of algorithms derived from different branches of statistics and applied mathematics. These algorithms learn from data: the more (Smart) data they have to learn from, the better and more accurate their results will be.
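To make the “learning from data” point concrete, here is a toy sketch (synthetic data, with an ordinary least-squares fit standing in for a full Machine Learning pipeline): the more examples the algorithm sees, the closer its estimate gets to the true underlying relationship:

```python
import numpy as np

rng = np.random.default_rng(42)

def noisy_samples(n):
    """Hypothetical sensor relationship y = 2x + 1, observed with noise."""
    x = rng.uniform(0, 10, n)
    y = 2 * x + 1 + rng.normal(0, 1, n)
    return x, y

# "Learning" here is a least-squares fit: the engine extracts the trend
# from examples, and more examples give a more accurate estimate.
for n in (5, 500):
    x, y = noisy_samples(n)
    slope, intercept = np.polyfit(x, y, 1)
    print(f"{n:4d} samples -> estimated slope {slope:.2f} (true slope: 2)")
```

The same intuition carries over to far richer models: fuel quality (Smart Data) and fuel quantity both determine how well the engine runs.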
Finally, a car!
Oil, refineries and engines wouldn’t be so crucial if we didn’t use them for something. In our metaphoric game, we have picked a car as the final use of natural resources and technology. By the same token, Big-Data, Smart-Data and Machine Learning wouldn’t be so useful without their support to the application of Artificial Intelligence.
This doesn’t mean that the fuel is useless on its own. Even though cars (both electric and internal-combustion ones) cannot run without electricity or gasoline, you can still do pretty useful things with electricity and gasoline alone (e.g. heating). The same goes for data and, for instance, its applications in business intelligence.
...but an intelligent one.
If you have followed us till now, you would feel disappointed if we didn’t go the last mile, which is the intelligent component. So what makes this car (this application) intelligent? Simplifying a bit, we talk about Artificial Intelligence any time a machine does something that we perceive as intelligent, or that would have required human intuition to be carried out. That’s why we pushed our parallel to the intelligent car: it surprises us, it can drive (maybe only for a few seconds, for the time being) without our intervention, and it can alert us if something is going wrong and take action. The same holds for Artificial Intelligence: it is meant to improve our productivity and our life experience, not to substitute for us.
How to prepare your company for A.I.
If you have followed us through this “explain like I’m 5” game, you now understand why it is essential for companies to prepare for A.I. applications. Artificial Intelligence doesn’t just happen. It needs data (the oil) to run. These data have to be cleaned and refined to turn into Smart Data (the gas). Smart Data is fed to Machine Learning algorithms (the engine) to make them work. Only when all the components are in place will the car move.
The subtle difference between an intelligent car and an A.I. application is that the fuel needed is, in most cases, something that you cannot buy at the gas station. It is instead something that you need to source and drill within the company. That is why it is so important for companies to invest in data storage, validation and cleaning. If you are not investing now in building your oil reservoir, you will have a tough time applying A.I. in your company.
Over the last weeks, COVID-19 has monopolized the attention of mass media. Indeed, the consequences of the pandemic are unprecedented and heavily impact all spheres of our lives.
Leveraging Big Data analytics, we are mapping the discussion surrounding the pandemic in traditional news media (i.e. online editions and videos published by major news sources) in Italy, the UK, the US and Canada. The new feature of our News Tracker allows users to explore the evolution of interest in different news topics.
As an example, the topic “Boris Johnson” (the UK Prime Minister) breaks into the debate significantly around the 16th of March, after the UK adopted the “herd immunity” strategy against COVID-19, causing mixed reactions.
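The mechanics behind such a trend can be sketched very simply (the headlines below are invented examples, not our actual dataset): count, per day, how often a topic string appears in the collected items:

```python
from collections import Counter
from datetime import date

# Invented stand-ins for scraped news items: (publication date, headline).
headlines = [
    (date(2020, 3, 15), "Government defends herd immunity plan"),
    (date(2020, 3, 16), "Boris Johnson urges people to avoid pubs"),
    (date(2020, 3, 16), "Boris Johnson under fire over strategy"),
    (date(2020, 3, 17), "Markets react to lockdown news"),
]

def topic_trend(items, topic):
    """Count how many items mention `topic`, grouped by day."""
    return Counter(day for day, text in items if topic.lower() in text.lower())

trend = topic_trend(headlines, "Boris Johnson")
print(sorted(trend.items()))  # mentions peak on 16 March
```

The real tracker uses topic models rather than literal string matching, but the output has the same shape: a per-day interest curve for each topic.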
In a connected world, where news travels at the speed of light through fibre optics, this approach sheds light on where people’s attention was, is, and is going, and offers additional insights to interpret people’s behaviour in such turbulent times.
We are honoured to collaborate with researchers from Greenwich University and ISI Foundation for this project on tracking and mapping news on COVID-19.
What is the project about?
COVID-19 News Tracker is a tool aimed at mapping the discussion surrounding the COVID-19 pandemic in traditional news media. The goal is to help users explore the latest developments across a multitude of sources.
For the analysis, we consider articles and YouTube videos published by major traditional news media organisations in Italy, the United Kingdom and the United States.
Currently, the tracker supports the following features:
How can I access the results?
The results of the research are openly available on our data-automation and Machine Learning platform Scops. Visit covid19.scops.ai to explore them!