Much has been written about the incredible power of AI and the supposedly impending singularity. Many people have expressed concern about just how advanced predictive algorithms and chatbots have become.
Indeed, AI’s advancements over just the past several years are making huge waves across every industry. It’s so effective that articles like this one have to undergo testing just to prove they weren’t written by sophisticated bots.
But even so, AI has a long way to go before it rivals the true creativity and ingenuity of humans. AI is smart, but it still sometimes fails at basic tasks even young children can perform successfully.
For the time being, at least, even the most advanced machine learning models still rely on human expertise. Here are three big reasons we still need living, breathing mortals to evaluate our data.
1. Data Cleanup
Many use cases for AI involve inputting large sets of data to be analyzed for repeating patterns. These data sets could have hundreds of thousands of entries and come in different formats from a number of sources. Once the data is gathered and unified, machine learning algorithms use it to diagnose problems or make recommendations.
To generate reliable results, however, AI models need data that has been cleaned and pre-processed for this purpose. Data pre-processing involves removing duplicates and correcting for irregularities, anomalies, and other outliers that could skew the results produced. Some AI models can help with data cleaning, but for accuracy, the process must at least be supervised by humans.
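To make the pre-processing steps above concrete, here's a minimal sketch in plain Python. The field names and sales figures are invented for illustration, and the two-standard-deviation threshold is just one common heuristic; note that the outliers are flagged for human review rather than silently deleted, which is exactly where the supervision comes in.

```python
from statistics import mean, stdev

# Hypothetical quarterly sales records; names and numbers are illustrative.
records = [
    {"rep": "A", "sales": 120},
    {"rep": "B", "sales": 115},
    {"rep": "B", "sales": 115},  # accidental duplicate entry
    {"rep": "C", "sales": 112},
    {"rep": "D", "sales": 8},    # anomaly, e.g. a data-entry error
    {"rep": "E", "sales": 130},
    {"rep": "F", "sales": 125},
    {"rep": "G", "sales": 118},
]

# Step 1: remove exact duplicates while preserving order.
seen, deduped = set(), []
for r in records:
    key = tuple(sorted(r.items()))
    if key not in seen:
        seen.add(key)
        deduped.append(r)

# Step 2: flag entries more than 2 standard deviations from the mean,
# so a human can review them instead of the pipeline silently dropping data.
values = [r["sales"] for r in deduped]
mu, sigma = mean(values), stdev(values)
flagged = [r["rep"] for r in deduped if abs(r["sales"] - mu) > 2 * sigma]

print(len(deduped), flagged)  # prints: 7 ['D']
```

In a real pipeline the flagged rows would go to a reviewer, who decides whether each one is a genuine outlier or a one-time event that needs correcting.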
Predictive analytics provide one good example of why human oversight is essential for successful AI. In predictive analytics, algorithms crawl through data and generate hypotheses about the future. But if they rely on messy data, they might falsely assume a one-time event is a repeat occurrence. A human eye is needed to sift through the data and remove or correct for these events.
According to Two Story, performance analytics can help sales teams improve performance by showing team members where they fell short. But in order for those analytics to be accurate, they need to rely on good, clean data. An associate’s numbers could be lower one quarter because he was out for weeks on medical leave. But that associate’s performance report will be wildly inaccurate if the data doesn’t account for that missed time.
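The medical-leave example above amounts to a simple normalization step a human reviewer would apply before feeding the numbers to an analytics model. A minimal sketch, with entirely made-up names and figures:

```python
# Hypothetical example: normalize each associate's quarterly sales by the
# days they actually worked, so medical leave doesn't distort comparisons.
quarter_working_days = 62  # assumed working days in the quarter

associates = [
    {"name": "Avery", "sales": 124000, "days_on_leave": 0},
    {"name": "Blake", "sales": 66000, "days_on_leave": 30},  # out on medical leave
]

for a in associates:
    days_worked = quarter_working_days - a["days_on_leave"]
    a["sales_per_day"] = a["sales"] / days_worked

# Raw totals suggest Blake badly underperformed, but the per-day
# rates show the two associates are actually comparable.
rates = [round(a["sales_per_day"]) for a in associates]
print(rates)
```

Without that adjustment, the raw totals would feed the model a skewed picture of Blake's performance.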
2. Ethical Issues
AI models can raise all sorts of ethical quandaries, and those questions can mean life or death. AI is pretty good at observing patterns and sharing information, but generally terrible at making big decisions. It’s also not foolproof at visually identifying objects, which is why we get stuck identifying boats in those annoying CAPTCHAs.
We have to train AI to understand what things look like and how to react. But even with this training, it still makes mistakes, so humans must continuously update and correct it. The need for human involvement in AI becomes especially obvious when it comes to programming self-driving vehicles. AI can’t quite handle the ethical and categorical nuances it takes to do the job.
For instance, the machine learning models that power self-driving cars need to be taught by humans how to react in certain situations. In a potential accident, a self-driving vehicle might have to decide whether to prioritize protecting a passenger or a pedestrian. It also needs to make sure it’s correctly identifying features on the road.
In Tempe, Arizona, a self-driving car experiment was shut down when a test vehicle killed a pedestrian. Because the pedestrian was pushing a bicycle, the car couldn’t recognize her as a human. An attentive human in the car would’ve been able to intervene and stop the vehicle. Additionally, more comprehensive, human-powered data evaluation prior to the experiment might’ve resulted in a better-trained car.
3. Improving AI Models
Human expertise is perhaps most crucial when it comes to producing helpful — and safe — AI for the future. After all, the knowledge it gives us will only be as good as the human wisdom we feed it. Some folks worry that advanced AI will eventually wipe out jobs and ruin careers. But so far, the opposite is true: millions of workers are tasked with using their expertise to train AI.
Of course, there’s the obvious: the engineers and data scientists who build the AI in the first place. Plus there are the software solutions providers who find ingenious ways to apply AI for practical uses like sales forecasts. But there’s also an enormous class of workers whose main function is to evaluate and improve AI. These people spend hours each day interacting with and studying AI.
For example, many major technology corporations are working to produce the best conversational AI bot. They will then use these bots to update web search functions and assist online shoppers, among other uses. But at this stage, the bots are prone to producing unhelpful or flawed responses. In one case, one mega tech firm’s chatbot may have intentionally lied to and insulted users.
To improve these AI chatbots, also known as large language models, skilled workers ask the bots specific questions. Then, using their expertise in writing or other specialized knowledge areas, they evaluate and suggest improvements in the bot’s responses. Over time, these corrections train the bots to produce better responses. But this wouldn’t be possible without the skilled labor of a vast number of human beings.
Have You Had Your Turing Test Today?
The world is still a long way from AI that can truly mimic human behavior or offer perfect predictions for your KPIs. Though the technology is getting more sophisticated by the day, its basic abilities can still get pretty comically wonky.
For instance, even a short session with ChatGPT will quickly reveal its flawed reasoning and other limitations. Just try sending it an essay and asking the bot to identify any sentences that contain more than 20 words. It won’t be long before you realize these bots still desperately need human help to function.
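The irony is that the task the chatbot struggles with is trivial in a few lines of code. A quick sketch (the sentence-splitting regex is deliberately naive, just enough for a rough check):

```python
import re

def long_sentences(text, limit=20):
    """Return sentences containing more than `limit` words."""
    # Naive split on ., !, or ? followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    return [s for s in sentences if len(s.split()) > limit]

essay = (
    "Short sentence. "
    "This sentence, on the other hand, keeps going and going with clause "
    "after clause until it has comfortably passed the twenty word mark set above."
)
print(len(long_sentences(essay)))  # prints: 1
```

A deterministic word count is exactly the kind of precise task where a plain script beats a probabilistic language model, and it's a handy reminder of which jobs still belong to conventional code and the humans who write it.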