Unpacking AI: Advancements, Dilemmas, and the Potential for Impact


Artificial Intelligence (AI) has been the talk of the town for quite some time now, and with good reason. With AI increasingly being integrated into our daily lives, it's essential to understand what it is, how it works, and the potential problems it might cause.

AI refers to computer systems that perform tasks in ways that mimic human intelligence. The field spans machine learning, neural networks, and natural language processing, among other techniques. The most fundamental of these is machine learning, in which programs find patterns in data and use those patterns to make predictions.
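At its simplest, "finding patterns and making predictions" can mean comparing a new example to ones already seen. Here is a minimal sketch of that idea, a 1-nearest-neighbour classifier; the animals and measurements are invented for illustration:

```python
# A minimal sketch of "learning from patterns": label a new data point
# by finding the most similar example already seen. All data is made up.

def nearest_neighbour(train, query):
    """Return the label of the training example closest to the query.

    train: list of ((x, y), label) pairs; query: an (x, y) point.
    """
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    _, label = min(train, key=lambda pair: dist2(pair[0], query))
    return label

# Toy "training data": animals described by (weight in kg, height in cm).
examples = [((4, 25), "cat"), ((5, 23), "cat"),
            ((30, 60), "dog"), ((35, 65), "dog")]

print(nearest_neighbour(examples, (6, 27)))   # a cat-sized animal -> "cat"
print(nearest_neighbour(examples, (28, 58)))  # a dog-sized animal -> "dog"
```

Real systems use far richer features and far more data, but the core loop, measure similarity to past examples and predict accordingly, is the same.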

Take Google Lens, for example. If you take a picture of a cat, it can recognize it and tell you what type of cat it is, and images like yours can feed back into training, improving how the system recognizes cats in the future.

Neural networks are another kind of machine learning system, modeled loosely on the structure of the human brain. They take in information and weigh the significance of different pieces of data to reach a conclusion. However, even the most advanced AI can run into strange errors that we would never have expected.
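The "weighing" a neural network does can be sketched with a single artificial neuron: multiply each input by a weight, add them up, and squash the total into a 0-to-1 score. The weights below are hand-picked for illustration, not actually learned:

```python
# One artificial "neuron": a weighted sum of inputs passed through a
# sigmoid activation, producing a score between 0 and 1. In a real
# network, training adjusts the weights; here they are hand-picked.

import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, squashed to (0, 1) by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# Three input features; the weights encode how much each one matters.
output = neuron(inputs=[0.9, 0.1, 0.4], weights=[2.0, -1.0, 0.5], bias=-0.5)
print(round(output, 3))
```

A full network stacks many such neurons in layers, so the output of one layer becomes the input of the next.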

For example, when an AI system trained to recognize a fish called a tench was asked to show what a tench looks like, it produced human fingers on a green background. Almost every photo of a tench online shows an angler holding the fish up as a trophy, so the system, having no concept of what a fish actually is, learned that the fingers wrapped around the fish were part of it.
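This failure mode can be reproduced in miniature. In the hypothetical training set below, each "photo" is reduced to yes/no features, and hands appear in every tench photo, so the best simple rule the model can learn keys on the hands rather than the fish:

```python
# A sketch of how a model latches onto the wrong cue. Because every
# "tench" photo in this made-up set also shows hands, the single-feature
# rule that fits the data best is "hands present means tench".

photos = [
    ({"fish_shape": True,  "hands": True},  "tench"),
    ({"fish_shape": True,  "hands": True},  "tench"),
    ({"fish_shape": True,  "hands": True},  "tench"),
    ({"fish_shape": False, "hands": False}, "not_tench"),
    ({"fish_shape": True,  "hands": False}, "not_tench"),  # a fish, but no hands
]

def rule_accuracy(feature):
    """Accuracy of the rule: predict "tench" exactly when `feature` is present."""
    hits = sum(feats[feature] == (label == "tench") for feats, label in photos)
    return hits / len(photos)

# The model picks whichever single-feature rule fits the training data best.
best_cue = max(["fish_shape", "hands"], key=rule_accuracy)
print(best_cue)  # "hands" fits the training photos better than "fish_shape"
```

Nothing here is malicious: the hands really are the most reliable signal in this skewed sample, which is exactly the problem.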

Another key aspect of AI is natural language processing, where machines learn to use human language by training on speech and writing that has been broken down into its component parts. This is much harder than it sounds, since machines must pick up subtle human intricacies that we cannot easily write down as rules.
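The very first of those breaking-down steps is tokenization: splitting raw text into the words and punctuation marks a model can count and learn from. Real systems use far more sophisticated tokenizers; this regex-based sketch just shows the idea:

```python
# A sketch of tokenization, the first step in natural language
# processing: split raw text into word and punctuation tokens.
# Production tokenizers (e.g. subword tokenizers) are far more complex.

import re

def tokenize(text):
    """Lowercase the text and split it into word and punctuation tokens."""
    return re.findall(r"[a-z']+|[.,!?]", text.lower())

print(tokenize("AI isn't magic, it's statistics!"))
```

Even this toy version hints at the subtleties: should "isn't" be one token or two, and does the exclamation mark carry meaning? Those judgment calls are exactly where the "human intricacies" live.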

Despite the challenges, AI has made significant advancements. For instance, AutoDraw is a tool developed by a team at Google that uses a neural network to analyze what someone is drawing and offers to finish it for them. The suggested drawing is often better than the original sketch, since the system recognizes what the person is trying to make.

As AI progresses and becomes increasingly intertwined with our everyday routines, it is crucial to consider the problems and consequences it may bring. The foundation of all AI is one fundamental concept - big data - colossal accumulations of diverse data so vast that it would take hundreds of human lifetimes to read through them.

While this data can be processed in minutes or even seconds by computers, the implications are enormous. The fastest supercomputers can now perform on the order of one quintillion calculations per second, a rate that is hard for us to even comprehend.
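The "human lifetimes" claim holds up to a back-of-the-envelope check. The numbers below are assumptions chosen for illustration: a petabyte of text (roughly 10^15 characters), a reading speed of about 20 characters per second, and an 80-year lifetime:

```python
# Back-of-the-envelope arithmetic on reading "big data" by hand.
# All three inputs are assumed round numbers, not measured values.

chars = 10 ** 15                       # assumed dataset size: ~1 PB of text
human_rate = 20                        # assumed reading speed, chars/second
lifetime_s = 80 * 365 * 24 * 60 * 60   # seconds in an 80-year lifetime

seconds_to_read = chars / human_rate
lifetimes = seconds_to_read / lifetime_s
print(f"{lifetimes:,.0f} human lifetimes")  # on the order of 20,000
```

Even with generous reading speed and a smaller dataset, the answer stays in the thousands of lifetimes, while a modern computer scans the same data in hours or less.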

While images of cats doing handstands on Mars in the style of Picasso, generated by Stable Diffusion, are new and original, they are only so because the AI learned to make them by combing through millions of other people's photos and artwork online.

This raises the question: should companies pay a commission or credit to those whose work contributed to training their AI?

But then, how would they decide who gets how much, given that every request draws on a different amount of data from each contributor?

The problem of AI-generated content goes beyond art. AI can also write essays, with ChatGPT becoming the go-to tool for students who have entire essays written for them. In a recent survey, half of the students admitted to using AI-generated essays, raising questions about the authenticity of their academic work.

How do we decide what constitutes original content, and what is simply a product of an AI algorithm?

The biggest challenge with AI-generated content, however, is not its authenticity but the potential for bias. AI programs are only as good as the data they are trained on, and the data we feed them comes with human biases. For instance, Microsoft's Bing search engine, powered by ChatGPT, acts like an AI co-pilot, helping users find answers to their questions. While the idea of asking a super-intelligent robot for answers is fascinating, its responses are not free of human biases. This raises concerns about the accuracy and fairness of the information provided by AI-powered tools.
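How skewed data becomes a skewed model can be shown with a deliberately silly example: a "model" that simply learns the most common answer in its training set. The training answers below are made up; the point is that the bias lives in the sample, not in the algorithm:

```python
# A sketch of bias inheritance: a naive model that always replies with
# the most frequent answer it was trained on. Feed it a skewed sample
# and you get a skewed model, with no malice in the code itself.

from collections import Counter

def train_majority(answers):
    """Learn to always reply with the most frequent training answer."""
    most_common, _count = Counter(answers).most_common(1)[0]
    return lambda: most_common

# A skewed (made-up) sample: 9 out of 10 training examples say "A".
model = train_majority(["A"] * 9 + ["B"])
print(model())  # the model now answers "A" regardless of the question
```

Real language models are vastly more sophisticated, but the principle scales: whatever patterns dominate the training data, including biased ones, dominate the output.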

Although it is simple to suggest that we design assessments that account for the use of AI, implementing the idea is much harder. AI technology is constantly advancing and becoming more sophisticated, which makes it increasingly difficult to differentiate between human and AI-generated work. And even if we succeed in creating such assessments, we are still confronted with the issue of evaluating that content's worth.

To sum up, AI holds enormous possibilities, but it also carries the risk of significant issues. Therefore, it's crucial to thoroughly evaluate its progress, challenges, and possible consequences.