The Age of A.I. (Artificial Intelligence)

In the modern era of science and technology, humans have developed a new approach to tougher jobs and activities: "AI," or "Artificial Intelligence." In this approach, machines carry out tasks the way humans do using the intelligence that comes naturally to them. A.I. was established as an independent field of study and research in 1956, and in the years since it has been further explored and optimized to make it more useful and efficient in helping humans.

In recent years A.I. has developed rapidly and taken on many functions: problem solving, reasoning, planning, learning, and manipulating objects (for example, moving, breaking, or throwing them). These activities are performed by so-called "intelligent" machines that have been trained, or "artificially taught," the algorithms needed to carry them out. The technology has been advanced dramatically by top tech giants such as Apple Inc., Google, and Microsoft, with the field growing by over 35% in the past few years. The age of A.I. has thus matured over the decade, and it will be developed further still.

The field was primarily founded to build a virtual system that can work on problems using logic and algorithms, completing tasks in a way that matches how a human would approach the same problem.

In the twenty-first century, A.I. has experienced a resurgence following concurrent advances in computing power, the availability of large amounts of data, and theoretical understanding, and it has become more important in the tech industry and even in daily life. For example, online payment applications such as Venmo and Google Pay use QR codes to recognize the ID of a merchant or user, and this entire process is made easy by the autonomous approach that A.I. enables.

A.I. research began at a 1956 workshop at Dartmouth College, where John McCarthy coined the term "Artificial Intelligence" to distinguish the new field from cybernetics and from the influence of cyberneticist Norbert Wiener. Attendees Allen Newell (CMU), Herbert Simon (CMU), John McCarthy (MIT), Marvin Minsky (MIT), and Arthur Samuel (IBM) became the founders of A.I. research and development. They and their students produced programs that the press described as "astonishing": computers were learning checkers strategies (and by 1959 were playing better than many humans), solving word problems in algebra, proving logical theorems, and speaking languages (English first). By the mid-1960s, research in the United States was heavily funded by the Department of Defense, and laboratories had been established around the world. The founders were very optimistic about the field's future: Herbert Simon predicted that machines would be capable, within twenty years, of doing anything a human could do. That optimism now looks well founded, as artificially intelligent systems are being adopted in almost every field to ease task completion.

But researchers had failed to appreciate the difficulty of some remaining problems, and progress decelerated. In 1974, in response to the criticism of Sir James Lighthill and ongoing pressure from the U.S. Congress to fund more productive projects, both the United States and the British governments cut funding for A.I. research and exploration. The years that followed became known as the "A.I. Winter," a period when raising and obtaining funding for A.I. development was quite tough. With little significant work on A.I. development, it was a slow decade for the field.

Then, in the 1980s, A.I. research started to flourish again, lifted by the commercial success of expert systems: programs that simulated the knowledge and analytical skills of human experts. By 1985 the market for A.I. had reached about a billion dollars. That is quite a leap, considering that the U.S. and British governments had given A.I. little importance only a decade earlier; in roughly ten years it had boomed. This was a great step toward a virtual environment in which trained, artificially intelligent machines help humans complete difficult tasks. At the same time, Japan's Fifth Generation Computer project inspired governments around the world to invest in academic A.I. research, and they did.

The development of metal-oxide-semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) transistor technology, enabled practical artificial neural network (ANN) technology in the 1980s. In the 1990s and early 21st century, A.I. began to be used in statistics, data storage and management, logistics, data mining, medical diagnosis, and many other fields. This rise was driven by the computational and logical capabilities that let artificial systems work automatically, and it led to further interdisciplinary fields and branches, including robotics and machine learning.
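To make the idea of an artificial neural network a little more concrete, here is a minimal sketch (not from the original article, and far simpler than any modern system) of its most basic unit: a single perceptron, trained with the classic perceptron learning rule to compute logical AND.

```python
# A minimal sketch of a single artificial neuron (a perceptron),
# the basic building block of the neural networks described above.
# It learns the logical AND function from its truth table.

def step(x):
    # Threshold activation: fire (1) if the weighted sum is positive.
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    # Two input weights plus a bias term, all starting at zero.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Truth table for logical AND.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
for (x1, x2), target in data:
    assert step(w[0] * x1 + w[1] * x2 + b) == target
```

Because AND is linearly separable, the perceptron rule is guaranteed to converge here; historically, the inability of a single perceptron to learn functions like XOR was one reason interest in neural networks stalled before the 1980s revival.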

According to Bloomberg's Jack Clark, 2015 was a landmark year for A.I. development, with the number of projects using Google's A.I. software increasing sharply. This marked the beginning of the age of A.I.
