If you had asked people 20 or 30 years ago what technology would look like today, the modern world would have exceeded their wildest expectations. The driving force behind this rapid evolution is, without a shadow of a doubt, artificial intelligence (AI).
In this article we discuss the history behind the evolution of AI and how the algorithms behind the technology are regulated to maintain professional standards.
How do we define AI and algorithm?
Artificial intelligence is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans and animals. AI is the theory and development of computer systems able to perform tasks that would usually require human intelligence, such as visual perception, decision-making, and speech recognition, to name a few.
An algorithm, in the context of machine learning, is the method by which an artificial intelligence system carries out its task; its general function is to predict output values from given input data. Two main types of task typically constitute machine learning algorithms: classification, which predicts a discrete category, and regression, which predicts a continuous value.
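To make the distinction concrete, here is a minimal sketch of the two task types using toy data; the function names and example values are illustrative only, not taken from any particular library:

```python
def classify_1nn(train, query):
    """Classification: return the label of the training point
    whose input is closest to `query` (1-nearest-neighbour)."""
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

def fit_line(points):
    """Regression: ordinary least squares for y = a*x + b
    on a list of (x, y) pairs; returns the slope and intercept."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Classification maps an input to a category...
labelled = [(1.0, "small"), (2.0, "small"), (8.0, "large"), (9.0, "large")]
print(classify_1nn(labelled, 7.5))   # → large

# ...while regression maps an input to a number.
a, b = fit_line([(1, 2), (2, 4), (3, 6)])
print(round(a * 4 + b, 2))           # → 8.0
```

Both examples follow the same pattern described above: learn from known input/output pairs, then predict an output for new input data.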
Innovative scientists and researchers have long understood the power of artificial intelligence and how valuable it could prove to be in growing the global economy through innovation.
Despite this, it is no secret that AI is very complex, and its building blocks took decades to conceive and to develop over time.
The following timeline shows some of the landmark events across the 20th century that saw artificial intelligence evolve from effectively just an idea into the first publicly available speech recognition software, just before the turn of the 21st century:
- 1949 – The first stored-program computer, the Manchester Mark 1, becomes operational
- 1955 – The first artificial intelligence program, the Logic Theorist, is created
- 1963 – DARPA funds AI at Massachusetts Institute of Technology (MIT)
- 1965 – Moore’s Law
- 1968 – Arthur C. Clarke and Stanley Kubrick predict that "by the year 2001 we will have machines with intelligence that matches or exceeds humans'"
- 1986 – Navlab, the first autonomous car, is built at Carnegie Mellon
- 1997 – Deep Blue, a chess-playing computer, defeats Garry Kasparov. Dragon Systems releases the first publicly available speech recognition software
Regulating algorithm creation
Companies strive for the best AI products and software, which requires their algorithms to be extremely impressive and near-faultless. Several organizations evaluate and assess the standard of the algorithms companies produce, one of which is NIST, the US National Institute of Standards and Technology.
NIST provides assessment across numerous disciplines of AI, including the FRVT (Face Recognition Vendor Test), PFT (Proprietary Fingerprint Template evaluation), and MINEX (Minutiae Interoperability Exchange Test).
After seeing what has happened regarding AI in the past few decades, it is very intriguing to think where it will be in the following few decades. Let’s watch this space!