What type of computer system can recognize and act on patterns or trends that it detects in large sets of data and is developed to operate like a human brain?

A(n) ________ is a type of intelligent technique that finds patterns and relationships in massive data sets too large for a human to analyze. a) expert system b) genetic algorithm c) neural network d) CAD e) inference engine

Robots and driverless cars

The desire for robots to act autonomously and to understand and navigate the world around them means there is a natural overlap between robotics and AI. While AI is only one of the technologies used in robotics, it is helping robots move into new areas such as self-driving cars and delivery robots, and helping robots learn new skills. At the start of 2020, General Motors and Honda revealed the Cruise Origin, an electric-powered driverless car, and Waymo, the self-driving group inside Google parent Alphabet, recently opened its robotaxi service to the general public in Phoenix, Arizona, offering a service covering a 50-square-mile area of the city.

Fake news

We are on the verge of having neural networks that can create photo-realistic images or replicate someone's voice in a pitch-perfect fashion. With that comes the potential for hugely disruptive social change, such as no longer being able to trust video or audio footage as genuine. Concerns are also starting to be raised about how such technologies will be used to misappropriate people's images, with tools already being created to splice famous faces into adult films convincingly.

Speech and language recognition

Machine-learning systems have helped computers recognise what people are saying with an accuracy of almost 95%. Microsoft's Artificial Intelligence and Research group also reported it had developed a system that transcribes spoken English as accurately as human transcribers.

With researchers pursuing a goal of 99% accuracy, expect speaking to computers to become increasingly common alongside more traditional forms of human-machine interaction.

Meanwhile, OpenAI's language prediction model GPT-3 recently caused a stir with its ability to create articles that could pass as being written by a human.

Facial recognition and surveillance

In recent years, the accuracy of facial-recognition systems has leapt forward, to the point where Chinese tech giant Baidu says it can match faces with 99% accuracy, provided the face is clear enough in the video. While police forces in western countries have generally only trialled facial-recognition systems at large events, authorities in China are mounting a nationwide program to connect CCTV across the country to facial recognition, to use AI systems to track suspects and suspicious behavior, and to expand the use of facial-recognition glasses by police.

Although privacy regulations vary globally, it's likely this more intrusive use of AI technology -- including AI that can recognize emotions -- will gradually become more widespread. However, a growing backlash and questions about the fairness of facial recognition systems have led to Amazon, IBM and Microsoft pausing or halting the sale of these systems to law enforcement.

Healthcare

AI could eventually have a dramatic impact on healthcare, helping radiologists to pick out tumors in x-rays, aiding researchers in spotting genetic sequences related to diseases and identifying molecules that could lead to more effective drugs. The recent breakthrough by Google's AlphaFold 2 machine-learning system is expected to reduce the time taken during a key step when developing new drugs from months to hours.

There have been trials of AI-related technology in hospitals across the world. These include IBM's Watson clinical decision support tool, which oncologists train at Memorial Sloan Kettering Cancer Center, and the use of Google DeepMind systems by the UK's National Health Service, where they will help spot eye abnormalities and streamline the process of screening patients for head and neck cancers.

Reinforcing discrimination and bias 

A growing concern is the way that machine-learning systems can codify the human biases and societal inequities reflected in their training data. These fears have been borne out by multiple examples of how a lack of variety in the data used to train such systems has negative real-world consequences. 

In 2018, an MIT and Microsoft research paper found that facial recognition systems sold by major tech companies suffered from error rates that were significantly higher when identifying people with darker skin, an issue attributed to training datasets being composed mainly of white men.

Another study a year later highlighted that Amazon's Rekognition facial recognition system had issues identifying the gender of individuals with darker skin, a charge that was challenged by Amazon executives, prompting one of the researchers to address the points raised in the Amazon rebuttal.

Since the studies were published, many of the major tech companies have, at least temporarily, ceased selling facial recognition systems to police departments.

Another example of insufficiently varied training data skewing outcomes made headlines in 2018 when Amazon scrapped a machine-learning recruitment tool that identified male applicants as preferable. Today research is ongoing into ways to offset biases in self-learning systems.

AI and global warming

As the size of machine-learning models and the datasets used to train them grows, so does the carbon footprint of the vast compute clusters that shape and run these models. The environmental impact of powering and cooling these compute farms was the subject of a paper by the World Economic Forum in 2018. One 2019 estimate was that the power required by machine-learning systems is doubling every 3.4 months.
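
To put that doubling rate in perspective, here is a minimal back-of-envelope calculation; the only input is the 3.4-month figure quoted above, and the yearly growth factor is simply what that rate implies if it holds.

```python
# Back-of-envelope: a quantity that doubles every 3.4 months grows by
# 2 ** (12 / 3.4), roughly 11.6x, over the course of a single year.
doubling_period_months = 3.4
growth_per_year = 2 ** (12 / doubling_period_months)
print(f"Growth over one year: {growth_per_year:.1f}x")
```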

The issue of the vast amount of energy needed to train powerful machine-learning models was brought into focus recently by the release of the language prediction model GPT-3, a sprawling neural network with some 175 billion parameters. 

While the resources needed to train such models can be immense, and are largely available only to major corporations, once trained these models require significantly less energy to run. However, as demand for services based on these models grows, power consumption and the resulting environmental impact again become an issue.

One argument is that the environmental impact of training and running larger models needs to be weighed against the potential machine learning has to have a significant positive impact, for example, the more rapid advances in healthcare that look likely following the breakthrough made by Google DeepMind's AlphaFold 2.

Computer vision is a field of artificial intelligence (AI) that enables computers and systems to derive meaningful information from digital images, videos and other visual inputs — and take actions or make recommendations based on that information. If AI enables computers to think, computer vision enables them to see, observe and understand.

Computer vision works much the same as human vision, except humans have a head start. Human sight has the advantage of lifetimes of context to train how to tell objects apart, how far away they are, whether they are moving and whether there is something wrong in an image.

Computer vision trains machines to perform these functions, but it has to do it in much less time with cameras, data and algorithms rather than retinas, optic nerves and a visual cortex. Because a system trained to inspect products or watch a production asset can analyze thousands of products or processes a minute, noticing imperceptible defects or issues, it can quickly surpass human capabilities.

Computer vision is used in industries ranging from energy and utilities to manufacturing and automotive – and the market is continuing to grow. It is expected to reach USD 48.6 billion by 2022.(1)

How does computer vision work?

Computer vision needs lots of data. It runs analyses of that data over and over until it discerns distinctions and ultimately recognizes images. For example, to train a computer to recognize automobile tires, it needs to be fed vast quantities of tire images and tire-related items so it learns the differences and recognizes a tire, especially one with no defects.

Two essential technologies are used to accomplish this: a type of machine learning called deep learning and a convolutional neural network (CNN).

Machine learning uses algorithmic models that enable a computer to teach itself about the context of visual data. If enough data is fed through the model, the computer will “look” at the data and teach itself to tell one image from another. Algorithms enable the machine to learn by itself, rather than someone programming it to recognize an image.

A CNN helps a machine learning or deep learning model “look” by breaking images down into pixels that are given tags or labels. It uses the labels to perform convolutions (a mathematical operation on two functions to produce a third function) and makes predictions about what it is “seeing.” The neural network runs convolutions and checks the accuracy of its predictions in a series of iterations until the predictions start to come true. It is then recognizing or seeing images in a way similar to humans.
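
As a concrete illustration of the convolution operation described above, here is a minimal NumPy sketch (an illustrative toy, not any vendor's implementation): a small kernel slides across an image's pixel values, and the resulting feature map responds strongly wherever the pattern the kernel encodes, in this case a vertical edge, appears.

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a small kernel over an image and sum the element-wise products."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel: responds where pixel intensity changes left to right.
edge_kernel = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]])

# Toy 6x6 "image": bright region on the left, dark on the right.
image = np.zeros((6, 6))
image[:, :3] = 1.0

feature_map = convolve2d(image, edge_kernel)
print(feature_map)  # large values appear along the column where the edge sits
```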

Much like a human making out an image at a distance, a CNN first discerns hard edges and simple shapes, then fills in information as it runs iterations of its predictions. A CNN is used to understand single images. A recurrent neural network (RNN) is used in a similar way for video applications to help computers understand how pictures in a series of frames are related to one another.
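
To show how stacked convolutions and the repeated predict-and-check iterations fit together in practice, here is a minimal sketch of a small CNN and a single training step. PyTorch is an assumed framework (not named in the article), and the random tensors stand in for a real labelled image dataset such as tire / not-tire photos.

```python
import torch
import torch.nn as nn

# A deliberately small CNN: two convolutional layers that learn edge- and
# shape-like detectors, followed by a linear layer that maps features to
# class scores.
class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training iteration on a random batch stands in for the repeated
# predict-check-adjust loop described above; real training would loop
# over many batches of labelled images.
images = torch.randn(8, 3, 64, 64)   # batch of 8 RGB images
labels = torch.randint(0, 2, (8,))   # their class labels
loss = loss_fn(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```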


The history of computer vision

Scientists and engineers have been trying to develop ways for machines to see and understand visual data for about 60 years. Experimentation began in 1959 when neurophysiologists showed a cat an array of images, attempting to correlate a response in its brain. They discovered that it responded first to hard edges or lines, and scientifically, this meant that image processing starts with simple shapes like straight edges.(2)

At about the same time, the first computer image scanning technology was developed, enabling computers to digitize and acquire images. Another milestone was reached in 1963 when computers were able to transform two-dimensional images into three-dimensional forms. In the 1960s, AI emerged as an academic field of study, and it also marked the beginning of the AI quest to solve the human vision problem.

1974 saw the introduction of optical character recognition (OCR) technology, which could recognize text printed in any font or typeface.(3) Similarly, intelligent character recognition (ICR) could decipher hand-written text using neural networks.(4) Since then, OCR and ICR have found their way into document and invoice processing, vehicle plate recognition, mobile payments, machine translation and other common applications.

In 1982, neuroscientist David Marr established that vision works hierarchically and introduced algorithms for machines to detect edges, corners, curves and similar basic shapes. Concurrently, computer scientist Kunihiko Fukushima developed a network of cells that could recognize patterns. The network, called the Neocognitron, included convolutional layers in a neural network.

By 2000, the focus of study was on object recognition, and by 2001, the first real-time face recognition applications appeared. Standardization of how visual data sets are tagged and annotated emerged through the 2000s. In 2010, the ImageNet data set became available. It contains millions of tagged images across a thousand object classes and provides a foundation for the CNNs and deep learning models used today. In 2012, a team from the University of Toronto entered a CNN into an image recognition contest. The model, called AlexNet, significantly reduced the error rate for image recognition. After this breakthrough, error rates have fallen to just a few percent.(5)

Computer vision applications

There is a lot of research being done in the computer vision field, but it’s not just research. Real-world applications demonstrate how important computer vision is to endeavors in business, entertainment, transportation, healthcare and everyday life. A key driver for the growth of these applications is the flood of visual information flowing from smartphones, security systems, traffic cameras and other visually instrumented devices. This data could play a major role in operations across industries, but today goes unused. The information creates a test bed to train computer vision applications and a launchpad for them to become part of a range of human activities:

  • IBM used computer vision to create My Moments for the 2018 Masters golf tournament. IBM Watson watched hundreds of hours of Masters footage and could identify the sights (and sounds) of significant shots. It curated these key moments and delivered them to fans as personalized highlight reels.
  • Google Translate lets users point a smartphone camera at a sign in another language and almost immediately obtain a translation of the sign in their preferred language.(6)
  • The development of self-driving vehicles relies on computer vision to make sense of the visual input from a car’s cameras and other sensors. It’s essential to identify other cars, traffic signs, lane markers, pedestrians, bicycles and all of the other visual information encountered on the road.
  • IBM is applying computer vision technology with partners like Verizon to bring intelligent AI to the edge, and to help automotive manufacturers identify quality defects before a vehicle leaves the factory.

Many organizations don’t have the resources to fund computer vision labs and create deep learning models and neural networks. They may also lack the computing power required to process huge sets of visual data. Companies such as IBM are helping by offering computer vision software development services. These services deliver pre-built learning models available from the cloud — and also ease demand on computing resources. Users connect to the services through an application programming interface (API) and use them to develop computer vision applications.
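
The exact endpoints and request formats differ between providers, so the following is a purely hypothetical sketch of calling a cloud vision service over HTTP using the requests library. The URL, field names, API key and the local file tire.jpg are placeholders, not part of any real product's API; a real integration should follow the chosen provider's documentation.

```python
import base64
import requests

# Hypothetical endpoint and JSON fields -- placeholders for whichever cloud
# vision service is actually used.
API_URL = "https://api.example.com/v1/vision/classify"
API_KEY = "your-api-key"

with open("tire.jpg", "rb") as f:
    payload = {"image": base64.b64encode(f.read()).decode("ascii")}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())  # e.g. a list of labels with confidence scores
```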

IBM has also introduced a computer vision platform that addresses both developmental and computing resource concerns. IBM Maximo Visual Inspection includes tools that enable subject matter experts to label, train and deploy deep learning vision models — without coding or deep learning expertise. The vision models can be deployed in local data centers, the cloud and edge devices.

While it’s getting easier to obtain resources to develop computer vision applications, an important question to answer early on is: What exactly will these applications do? Understanding and defining specific computer vision tasks can focus and validate projects and applications and make it easier to get started.

Here are a few examples of established computer vision tasks:

  • Image classification sees an image and can classify it (a dog, an apple, a person’s face). More precisely, it is able to accurately predict that a given image belongs to a certain class. For example, a social media company might want to use it to automatically identify and segregate objectionable images uploaded by users. (A minimal classification sketch follows this list.)
  • Object detection can use image classification to identify a certain class of image and then detect and tabulate appearances of that class in an image or video. Examples include detecting damage on an assembly line or identifying machinery that requires maintenance.
  • Object tracking follows or tracks an object once it is detected. This task is often executed with images captured in sequence or real-time video feeds. Autonomous vehicles, for example, need to not only classify and detect objects such as pedestrians, other cars and road infrastructure, they need to track them in motion to avoid collisions and obey traffic laws.(7)
  • Content-based image retrieval uses computer vision to browse, search and retrieve images from large data stores, based on the content of the images rather than metadata tags associated with them. This task can incorporate automatic image annotation that replaces manual image tagging. These tasks can be used for digital asset management systems and can increase the accuracy of search and retrieval.
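
As a small, concrete example of the image-classification task above, the following sketch runs a pretrained network over a single local photo. It assumes torchvision (version 0.13 or later) and a file named dog.jpg, neither of which comes from the article, and it does not represent IBM's tooling.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Classify one local image with a pretrained ImageNet model.
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

preprocess = weights.transforms()  # resize, crop and normalize as the model expects
image = preprocess(Image.open("dog.jpg")).unsqueeze(0)

with torch.no_grad():
    scores = model(image).softmax(dim=1)

# Print the three most likely classes with their probabilities.
top = scores.topk(3)
for prob, idx in zip(top.values[0], top.indices[0]):
    print(f"{weights.meta['categories'][int(idx)]}: {prob.item():.2%}")
```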
