Unfortunately, there are practical limits currently facing the industry, especially outside of tech giants like Google, Facebook and Microsoft. In this article we highlight some of the biggest practical challenges facing the artificial intelligence and machine learning sectors today.
Access to clean data
For companies looking to apply AI to any number of areas, getting large amounts of clean data is going to be one of the biggest challenges.
Artificial intelligence and machine learning rely on data, and often on the creation of new datasets, to analyse trends and patterns in behaviour and draw conclusions. The quality of the data is of paramount importance: it needs to be widely representative and balanced.
Having too little data, or inputting inaccurate data, can cause AI systems to make incorrect predictions. This can lead to the wrong insight being gleaned and business decisions being made on false premises, the end result of which could be huge financial loss.
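As a toy illustration of the point, the sketch below uses entirely synthetic data and a deliberately simple nearest-centroid "model": when a large share of the training records are mislabelled, the model the system learns no longer reflects reality, and its predictions on fresh data suffer.

```python
import random

random.seed(0)

def make_data(n):
    """Synthetic two-class data: class 0 centred at 0.0, class 1 at 2.0."""
    xs, ys = [], []
    for _ in range(n):
        y = random.randint(0, 1)
        xs.append(random.gauss(2.0 * y, 0.5))
        ys.append(y)
    return xs, ys

def fit_centroids(xs, ys):
    """A minimal 'model': the mean feature value of each class."""
    return {label: sum(x for x, y in zip(xs, ys) if y == label)
                   / max(1, sum(1 for y in ys if y == label))
            for label in (0, 1)}

def accuracy(model, xs, ys):
    hits = sum(1 for x, y in zip(xs, ys)
               if min(model, key=lambda c: abs(x - model[c])) == y)
    return hits / len(xs)

train_x, train_y = make_data(500)
test_x, test_y = make_data(500)

clean_acc = accuracy(fit_centroids(train_x, train_y), test_x, test_y)

# Simulate dirty data: 80% of class-1 records are mislabelled as class 0,
# so the training set is no longer representative.
dirty_y = [0 if (y == 1 and random.random() < 0.8) else y for y in train_y]
noisy_acc = accuracy(fit_centroids(train_x, dirty_y), test_x, test_y)

print(clean_acc > noisy_acc)
```

The effect scales: the simpler the model and the noisier the labels, the further the learned decision boundary drifts from where it should be.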
Most of the larger brands have been aware of the value of data for years. The ad market in particular has helped companies to recognise the value of first-party data. As a result, many companies have been investing heavily in creating the infrastructure to collect and store the data they generate, as well as recruiting the talent capable of making use of it. Those that are further ahead in this area will find that they have a competitive advantage in integrating AI into their businesses.
For smaller companies and startups, gathering first-party data is more of an issue, especially in light of the increasing cost of acquiring third-party data. A great deal of time and resource is required to gather enough data for use in machine learning.
There is also the question of data governance. A small number of companies own, and are continuing to collect, vast swathes of data about each of us. This data can be used to paint an accurate picture of our interactions, activities, likes and dislikes, relationships and social media activity. Expanded to a population, such data would grant a great deal of power and influence to those who control it.
It will be interesting to see what impact GDPR has on this when it comes into force in May 2018. The regulation includes reference to a “right to explanation”, which is intended to improve transparency and accountability for machine-assisted decision-making. Its impact, however, will depend on how national courts across the EU interpret and apply it.
Master of One
The AI systems of today are specialists. They are capable of performing specific tasks, like playing games, identifying objects or determining the best time to publish a story. Ask them to do all three at the same time, however, and they will most likely fail.
Specialised AI is created to learn and become better at one specific task. It does this by repeatedly attempting the task, measuring the outcome of each attempt, and adjusting itself until its performance on that task can no longer be improved.
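That loop can be sketched in miniature. The toy learner below tunes a single weight by gradient descent until it is very good at one invented task (predicting y = 3x), and is then handed a different task (y = 5x), on which it fails badly; all the numbers here are made up purely for illustration.

```python
# Toy single-task learner: fit one weight w so that y_hat = w * x
# matches the task y = 3 * x, by gradient descent on squared error.
task_a = [(x, 3 * x) for x in range(1, 6)]

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data)

w = 0.0
for _ in range(2000):
    grad = sum(2 * x * (w * x - y) for x, y in task_a)
    w -= 0.001 * grad

# The learner has specialised: w has converged very close to 3,
# so it is near-perfect on task A...
print(loss(w, task_a) < 1e-6)

# ...but the same tuned model fails on a different task, y = 5 * x.
task_b = [(x, 5 * x) for x in range(1, 6)]
print(loss(w, task_b) > 100)
```

Nothing about the learned weight transfers: excellence at task A tells the system nothing about task B, which is exactly the specialist limitation described above.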
Because of this, AI systems have to be explicitly designed so that their solutions do not cause problems in areas beyond those they were built to consider. And this is where conflicting AIs could cause trouble. In a smart city, for example, it’s easy to see how a conflict could arise between the AI system in charge of road lighting and the one in charge of regulating power usage.
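A toy simulation makes the conflict concrete. Here two invented single-objective controllers share one variable, street brightness: the lighting controller nudges it towards a high target, the power controller nudges it back towards a low cap, and the system settles at a level neither objective actually wants (targets and step sizes are arbitrary).

```python
# Two single-objective controllers acting on the same shared variable.
LIGHTING_TARGET = 90.0   # lighting AI wants bright, safe streets
POWER_TARGET = 40.0      # power AI wants consumption capped

brightness = 60.0
history = []
for _ in range(6):
    # Each controller moves brightness halfway towards its own target.
    brightness += (LIGHTING_TARGET - brightness) * 0.5
    brightness += (POWER_TARGET - brightness) * 0.5
    history.append(round(brightness, 1))

print(history)   # settles between the two targets, satisfying neither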
Intelligent organisms like humans, on the other hand, are capable of learning from previous tasks and applying that knowledge to whatever they are currently working on. This “out-of-the-box” thinking is an element of human problem-solving and ingenuity that today’s AIs are unlikely to emulate in the near future.
The idea of AI has been around for a long time. But until very recent technological advances, it was just that: an idea. Cloud-based and massively parallel processing (MPP) systems, however, have allowed the idea to grow and develop to the stage where a business of any size can experiment with AI on its existing infrastructure.
Sooner or later, though, businesses treading this path will push their server performance to the limit, leading to false conclusions being drawn and, subsequently, real consequences for the success of the business.
Wall Street, for example, is a big proponent of using AI models for stock picking. With machine learning it’s possible to crunch millions of data points across thousands of stocks in real time to guide trading decisions. If a competitor is using a processor that can make these calculations even a few seconds faster than your own, it could have serious implications for your investments.
In order to further develop AI systems, we need larger data volumes to help learning systems build more sophisticated models. This will require much more advanced processors, high-performance storage and greater I/O bandwidth to ensure your GPUs are constantly fuelled with data. Without these advanced capabilities, your data will never reach its full potential.
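One generic technique for keeping an accelerator fed is a bounded prefetch buffer between storage and compute, so I/O happens ahead of, and in parallel with, the number crunching. A minimal sketch in plain Python follows; the batch contents and timings are invented, and real pipelines would use a framework's own data-loading machinery rather than hand-rolled threads.

```python
import queue
import threading
import time

def loader(batches, buf):
    """Read batches ahead of the consumer, hiding storage latency."""
    for batch in batches:
        time.sleep(0.001)        # pretend this is slow disk / network I/O
        buf.put(batch)           # blocks when the buffer is full
    buf.put(None)                # sentinel: no more data

def consume(buf):
    """Stand-in for the GPU: process batches as fast as they arrive."""
    results = []
    while (batch := buf.get()) is not None:
        results.append(sum(batch))
    return results

buf = queue.Queue(maxsize=4)     # bounded prefetch buffer
batches = [[i, i + 1] for i in range(8)]

t = threading.Thread(target=loader, args=(batches, buf))
t.start()
results = consume(buf)
t.join()
print(results)
```

The bounded queue is the key design choice: it lets the loader run ahead without exhausting memory, and the consumer never waits unless storage genuinely cannot keep up, which is exactly the bottleneck faster I/O is meant to remove.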
Future-Proof Your Infrastructure
Power Systems by IBM are the fastest and simplest way for you to roll out accelerated databases and deep learning frameworks.
The POWER9 processor lies at the centre of the IBM Power Systems suite – the only range of servers with a combination of state-of-the-art I/O subsystem technology, including next generation NVIDIA NVLink, PCIe Gen4, and OpenCAPI. These interfaces give POWER9 a data bandwidth superhighway that delivers your insights, faster.
Is your organisation experimenting with AI? Or are you looking to expand your capabilities? If you want to discover how to get an architecture that will maximise the performance of your AI, download our free whitepaper: Power up your AI.