
Artificial Intelligence


History

Artificial Intelligence (AI) is a diverse field of research and development dealing with machine intelligence. Its underlying concepts reach back many centuries into Greek philosophy and mathematics. The idea of universal computation has been discussed in connection with concepts for computational devices since at least the times of Charles Babbage and Ada Lovelace. Alan Turing described a theoretical computing machine in 1936/7 (Turing, 1936; 1938), and the brain was first described as a network of interconnected neurons by McCulloch and Pitts (McCulloch & Pitts, 1943). Artificial Intelligence was “christened” and launched as a large-scale scientific endeavour at the so-called “Dartmouth Conference” in the summer of 1956. The first wave of enthusiasm for AI encouraged radical predictions, such as the arrival of machines with human-like intelligence within a single generation. The 1950s also saw the first experiments with machines based on the neural network concept, the so-called perceptrons developed by Rosenblatt (Rosenblatt, 1958). The challenges faced by AI were generally underestimated by its champions, and when promised results failed to materialise, the interest necessary to ensure funding of research waned. Following the publication of the book “Perceptrons” by Minsky and Papert in 1969 (Minsky & Papert, 1969), in which they showed that the perceptron could not distinguish between certain simple patterns, work on neural networks entered a period of stagnation. The resulting disillusionment led to a so-called “AI winter”, which lasted until Japan launched an ambitious programme for the development of the “Fifth Generation” of computers with a strong focus on AI, and advances in “expert systems” gave new hope that AI would be able to realise its ambitious goals after all.
There followed another “golden age” of AI with renewed optimism and radical predictions, which ended when it became obvious that the predictions would not be realised and that existing AI systems had limitations. AI is currently experiencing renewed interest, for reasons described later.

As mentioned in the first paragraph, the field of AI is quite diverse, ranging from a robotics perspective to an expert-systems perspective to a neuroscience-oriented perspective, to mention a few. The common denominator across these diverse fields, however, is the creation of machines that possess or emulate human-like intelligence. This article does not claim to provide a comprehensive description of the field but rather gives a restricted and perhaps narrow perspective of AI that the ETICA project considers likely to emerge and become more visible in the medium term of 10 to 15 years. In addition, as AI is one of 12 identified emerging technologies, some perspectives on this technology are likely to have been covered under one or more of the other identified technologies.

As such, the field can roughly be divided into so-called “strong AI”, which aims to develop machine intelligence equal or superior to human intelligence, and “applied AI” (sometimes also known as “weak AI”), which focuses on applications of machine intelligence for specific tasks and services. Applied AI is already widely used in fields such as e-business, mobility and security technologies and may already be having a strong impact on our society and culture. Today, successful AI applications range from custom-built expert systems to mass-produced software and consumer electronics.

The research programme of strong AI has goals whose realisation would have fundamental impacts on our self-understanding as homo sapiens, since humans would create a real partner “species” or even a potential successor of humanity. While pioneers of strong AI, such as Marvin Minsky (Wolff, 2006), portray the last decades of the twentieth century (in which applied AI was dominant) as wasted time for AI, the proponents of applied AI argue that the unrealistic and far-reaching claims of Minsky and others have discredited the field in the view of the public and of decision makers. It is of particular interest that there appears to be a renaissance of strong AI in the 2000s, since the development of strong AI would make technology a counterpart of biology, able to “live” and evolve in a similar way as biological species, together with humanity.

Application Areas/Examples

Application examples include:

  • Software agents
  • Artificial brains in “artificial people”
  • Artificial Intelligence chips
  • Control system for robots
  • Expert Systems
  • Fuzzy Logic
  • Artificial Neural Networks
  • Data Mining

Current AI approaches are quite diverse and the products of AI research have frequently been incorporated into larger systems not generally tagged as AI. These include data mining, industrial robotics, speech recognition and even the Google search engine. Some AI has trickled into the mainstream of computer science, but due to the traumatic experience of the AI winters, researchers avoid the term for fear of being identified with the radical visions linked with the strong AI approach.

The Korean 2025 Vision foresees that “(t)he development of artificial intelligence and information communicative devices will make it possible to lead to a comfortable and automated home and society” (Korean Government, 2000, p. 13). This is obviously related to ambient intelligence.

New areas are less concerned with the business of making computers think, focusing instead on what can be referred to as “weak AI”: the development of practical technology for modelling aspects of human behaviour (OFCOM, 2007). In this way, “AI research has produced an extensive body of principles, representations, and algorithms. Today, successful AI applications range from custom-built expert systems to mass-produced software and consumer electronics” (Arnall, 2003).

Software Agents in e-commerce

A specific application of AI research is “software agents”, with research on the topic starting as far back as the late 1980s. Today, software agents are mainly understood as programs that work independently (autonomous), react to changes in their environment (reactive), act proactively, and communicate with other software agents. In the early 1990s the term “software agents” was used quite broadly, covering different areas of research in software technology. Uses include routine tasks such as complex searches for information in libraries and network-attached storage, and the processing of (digital) products in e-commerce. Software agents need context awareness, on the one hand, to understand for example the wishes of their users, and on the other hand a certain type of “intelligence” to be able to make decisions. An important area of use for software agents today is simulation, in science and in computer games; in this context a number of highly specialised agents have been developed and are in use. Software agents may play a major role in ambient intelligent environments, searching autonomously for information, evaluating it and drawing conclusions, including adaptive decision making. Research on software agents is carried out worldwide in the private sector (mainly enterprises) and by public research institutions (mainly universities). In Europe, coordination of stakeholders is supported by the EU in the context of the IST programme.
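The four properties described above (autonomy, reactivity, proactivity and communication) can be illustrated with a minimal sketch. All class, method and message names here are invented for illustration; real agent platforms such as JADE define much richer lifecycles and messaging protocols.

```python
# Minimal sketch of a software agent: autonomous, reactive, proactive
# and able to communicate with other agents. Names are illustrative.

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []                        # messages from other agents (communication)
        self.goal_queue = ["collect_prices"]   # the agent's own goals (proactivity)

    def perceive(self, environment):
        # React to the current state of the environment (reactivity):
        # keep only the signals that are actually present.
        return {k: v for k, v in environment.items() if v is not None}

    def decide(self, percepts):
        # Autonomous decision: pursue a goal if the environment permits it.
        if self.goal_queue and "price_feed" in percepts:
            return self.goal_queue.pop(0)
        return None

    def send(self, other, message):
        other.inbox.append((self.name, message))  # inter-agent communication

    def step(self, environment):
        return self.decide(self.perceive(environment))

buyer = Agent("buyer")
seller = Agent("seller")
action = buyer.step({"price_feed": [9.99, 10.49], "weather": None})
buyer.send(seller, {"request": action})
print(action)        # collect_prices
print(seller.inbox)  # [('buyer', {'request': 'collect_prices'})]
```

The same perceive-decide-act loop, repeated over time and over many communicating agents, is what e-commerce and simulation agents elaborate at scale.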

Artificial brains

AI research is also looking into the application area of artificial brains, from which human-like functions are expected to emerge on the basis of natural principles. The resonant tunnelling diode, for example, possesses characteristics similar to those of the channel proteins responsible for much of neurons’ complex behaviour. Higher functions of the brain are seen as emergent properties of the interactivity between neurons, collections of neurons, and the brain and its environment. On this view, “artificial people” will be very human-like, given that their natural intelligence will develop within the human environment over a long course of close relationships with humans. Artificial humans will not be any more like computers than humans are: they will not be programmable or especially good at computing, and they will need social systems to develop their ethics and aesthetics. A human-brain interface could be based on minimally invasive nano-neuro transceivers, with communication based on the same neural fundamentals as an artificial brain. Humans might find enhancement via such paths risky.

Artificial Intelligence Chips

At a slightly less ambitious level, but already expected by 2025, are artificial intelligence chips, enabling computers to understand human feelings and possibly even to read information from the brain using electromagnetic information (Korean Government, 2000). AI systems are also used to control robots, particularly those operating in environments inhabited by humans and other life forms.

Control system for robots

Pires (2007) states that robot control systems “are electronic programmable systems responsible for moving and controlling the robot manipulator, providing also the means to interface with the environment and the necessary mechanisms to interface with regular and advanced users or operators” (p. 86). Such a control system enables the robot’s functionality and can keep the robot under control in unforeseen circumstances, such as when an error occurs.
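The idea of a control system that moves a manipulator while coping with an unforeseen error can be sketched as a simple feedback loop. This is a minimal illustration, not Pires’s design: the proportional gain, command limit and fail-safe behaviour are invented assumptions.

```python
# Hedged sketch of a feedback control loop: drive a joint toward a
# target position, saturate the actuator command, and fail safe when
# an unforeseen circumstance (here, a sensor fault) occurs.

def control_step(target, measured, kp=0.5, max_cmd=1.0):
    """Return a bounded actuator command computed from the position error."""
    if measured is None:        # unforeseen circumstance: sensor reading lost
        return 0.0              # fail safe: command no motion
    error = target - measured
    command = kp * error        # proportional control
    return max(-max_cmd, min(max_cmd, command))  # saturate the command

position = 0.0
for _ in range(50):             # simulate the closed loop
    position += control_step(target=10.0, measured=position)

print(round(position, 2))       # the joint converges toward the 10.0 target
```

The explicit fail-safe branch is the part that addresses “unforeseen circumstances”: rather than acting on missing data, the controller commands a stop.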

Expert Systems

Expert systems come in many forms and can therefore be applied in many areas, including business and medicine among others. They are capable of high-level processing and interpretation of data. Engelmore and Feigenbaum (1993) state that “AI programs that achieve expert-level competence in solving problems in task areas by bringing to bear a body of knowledge about specific tasks are called knowledge-based or expert systems. Often, the term expert systems is reserved for programs whose knowledge base contains the knowledge used by human experts, in contrast to knowledge gathered from textbooks or non-experts. Taken together, they represent the most widespread type of AI application”[1]. An example of where such an application works is given by Malone (1993), who outlines how expert systems can be applied in accounting, particularly for “tax, auditing, financial modeling, managerial decisionmaking, personnel selection, accounting education and training, and decision reinforcement” (p. 1) purposes. In relation to medical purposes, Mauno and Crina (2008) discuss how medical expert systems can assist clinicians in “the diagnostic processes, laboratory analysis, treatment protocol, and teaching of medical students and residents” (p. 1).
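The knowledge-based architecture described above separates a knowledge base (rules elicited from a human expert) from an inference engine that applies them. The sketch below uses forward chaining, one common inference strategy; the two tax-advice rules are invented for illustration and are not real accounting guidance.

```python
# Minimal sketch of an expert system: a rule base plus a forward-
# chaining inference engine. The rules are hypothetical examples.

rules = [
    # (condition over the facts, conclusion to add)
    (lambda f: f.get("income") is not None and f["income"] > 50000, "high_bracket"),
    (lambda f: "high_bracket" in f["derived"], "recommend_deductions"),
]

def infer(facts):
    """Fire rules repeatedly until no new conclusion can be derived."""
    facts = dict(facts, derived=set())
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition(facts) and conclusion not in facts["derived"]:
                facts["derived"].add(conclusion)
                changed = True
    return facts["derived"]

print(sorted(infer({"income": 80000})))
# ['high_bracket', 'recommend_deductions']
```

The second rule only fires once the first has derived its conclusion, which is exactly the chaining behaviour that lets expert systems reach multi-step conclusions from raw facts.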

Fuzzy Logic

Fuzzy logic is the part of machine intelligence that allows computers to define and/or measure imprecise or “fuzzy” elements. Kosko and Isaka (1993) give an example of how fuzzy logic works by stating that “fuzzy logic manipulates such vague concepts as ‘warm’ or ‘still dirty’ and so helps engineers to build air conditioners, washing machines and other devices that judge how fast they should operate or shift from one setting to another even when the criteria for making those changes are hard to define” (p. 76). A fuzzy logic application is thus designed to take multiple factors into account so that it can control a continuous range of values rather than switch between fixed states. Mammar and Chaker (2009) demonstrate fuzzy logic in fuel-cell power generation in a residential setting, where it was used to control the active and reactive power under load variation.
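The “warm” example from Kosko and Isaka can be made concrete: a temperature belongs to a fuzzy set to a degree between 0 and 1, and a controller blends rule outputs weighted by those degrees. The membership breakpoints and fan speeds below are illustrative assumptions, not values from the cited work.

```python
# Hedged sketch of fuzzy control: graded set membership plus a
# weighted-average defuzzification step. All numbers are illustrative.

def triangular(x, a, b, c):
    """Degree of membership in a triangular fuzzy set peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fan_speed(temp_c):
    # Fuzzify: how "cool", "warm" and "hot" is this temperature?
    cool = triangular(temp_c, 0, 10, 20)
    warm = triangular(temp_c, 15, 22, 30)
    hot = triangular(temp_c, 25, 35, 45)
    # Each rule maps a fuzzy set to an output speed; blend by membership.
    weights = [(cool, 0.1), (warm, 0.5), (hot, 1.0)]
    total = sum(w for w, _ in weights)
    return sum(w * s for w, s in weights) / total if total else 0.0

print(round(fan_speed(22.0), 2))  # fully "warm", so the blended speed is 0.5
```

Because membership is graded, the controller output changes smoothly as the temperature moves between sets, which is the behaviour that makes fuzzy controllers suit appliances with hard-to-define switching criteria.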

Artificial Neural Networks (ANN)

An ANN is a computational model that tries to simulate natural neurons. Tan (undated) suggests that ANNs can be used to analyse risk and solve complex tasks, and gives an example of how an ANN might work in finance: “Dealing with uncertainty in finance primarily involves recognition of patterns in data and using these patterns to predict future events. Accurate prediction of economic events, such as interest rate changes and currency movements currently ranks as one of the most difficult exercises in finance; it also ranks as one of the most critical for financial survival.” (p. 4). Krieger (1996) has also indicated that ANNs are very prevalent in data mining: due to what she terms their “model-free” estimators and their dual nature, “neural networks serve data mining in a myriad of ways” (Krieger, 1996, pp. 2-3). Finally, a strong faction of AI researchers and champions expects the realisation of a “singularity” according to Ray Kurzweil’s vision[2] (Kurzweil, 2005): that technology exceeding the capacity and processing power of the human brain should be technologically feasible within the foreseeable future. Some of them predict major cultural, societal and political disruptions, or even a global war, resulting from these developments (Garis, 2005).
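The simulated-neuron idea goes back to Rosenblatt’s perceptron, mentioned in the history section above. A single artificial neuron can be sketched learning the logical AND function; the learning rate, epoch count and training data are illustrative choices.

```python
# Minimal sketch of one artificial neuron (a perceptron) trained with
# Rosenblatt's learning rule on the logical AND function.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # one weight per input, like synaptic strengths
bias = 0.0
rate = 0.1       # illustrative learning rate

def predict(x):
    # The neuron "fires" (outputs 1) if the weighted sum exceeds zero.
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):                      # training epochs
    for x, target in data:
        error = target - predict(x)      # perceptron learning rule:
        w[0] += rate * error * x[0]      # nudge weights toward the target
        w[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in data])     # [0, 0, 0, 1]
```

Modern ANNs used in data mining stack many such units in layers and train them by gradient descent, but the pattern-from-examples principle is the same.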

Definition and Defining Features

Definition

It is difficult to give a universally accepted definition of artificial intelligence, since a major feature is the display of behaviour acknowledged as evidence of the existence of “intelligence” in humans or other natural beings; in AI, this intelligence is the result of computer programmes. There are distinctly different approaches to the realisation of AI, starting with the divide between “strong AI”, which claims to understand intelligence in humans and other living beings, and “weak AI”, which more modestly seeks to find processes which lead to the same results as intelligence in natural beings. In both cases, the products of research are computer programmes representing intelligent processes. It is clear from this that AI’s most defining feature is intelligence.

A major problem of artificial intelligence is that the concept of intelligence itself is by no means clear and has changed over time. New discoveries from research on human intelligence have had an impact on research on “artificial intelligence”. A working definition of intelligence is provided by Hutter and Legg (2005): “[i]ntelligence measures an agent’s ability to achieve goals in a wide range of environments”. This implies the ability to learn, adapt and “understand” or infer.

Modern concepts of intelligence employed in the cognitive sciences and AI research no longer restrict intelligence to the manipulation of symbols (rationality) but embrace further components, including creativity, problem solving, pattern recognition, classification, learning, induction, deduction, building analogies, optimisation, surviving in an environment, language processing, planning and knowledge. Research suggests that there is no real separation between rational thinking and emotion in human beings, which provides a justification for the rise of the related field of “affective computing” (Picard, 1995).

Since the late 1980s, there has been a completely new approach derived from robotics which builds on the belief that a machine needs a body moving around in the world to display true intelligence (“nouvelle AI”). This is linked with a rejection of the division of cognition into layers (perception, recognition, planning, execution, actuation), replaced with the notions of reactivity and situatedness: here intelligence stems from a tight coupling between sensing and cognition. Similarly, nouvelle AI rejects the notion of parameterising the functional parts of robotic artefacts. It gives “the evolutionary code full access to the substrate” and thus “the search procedure does without conventional human biases, discovering its own ways to decompose the problem – which are not necessarily those human engineers would come up with”[3]. According to this approach, intelligence is an emergent property.

Defining Features

Intelligence: From the definition given above, it is clear that one of AI’s defining features is intelligence. This is key to the technology, especially as systems in this realm are expected to think in a human-like way and as such to possess human intelligence.

Time Line

Computer systems that display rudimentary to complex intelligence are expected within the next 15 years (Korean Government, 2000). Kurzweil predicts a “Singularity” within approximately 30 years.

Progress in Neuroscience fuelled by new and advanced brain imaging techniques to visualise processes in the human brain and to link these with specific types of human thinking is expected to provide a basis for rapid advances in AI.

It will be possible to build computers of ever-increasing size and power thanks to the continued validity of “Moore’s Law”. Nanotechnology holds the promise of a further continuation when silicon technology reaches its limits. Kurzweil estimates human brain capacity at 100 billion neurons, with 1,000 connections per neuron, each conducting 200 calculations per second. Thus computers possessing the capacity of one human brain for $1,000 should be available by 2023. The price should drop to a single cent by 2037. A computer with the brain capacity of the entire human race for $1,000 should be available by 2049, with the price dropping to a single cent by 2059 (Kurzweil, 2001).
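Kurzweil’s three figures multiply out to an implied brain capacity of about 2 × 10^16 calculations per second, the benchmark his price projections are measured against. A quick arithmetic check:

```python
# Check the brain-capacity figure implied by Kurzweil's estimate
# quoted above: 100 billion neurons, 1,000 connections per neuron,
# 200 calculations per second per connection.

neurons = 100e9
connections_per_neuron = 1_000
calcs_per_second = 200

brain_ops = neurons * connections_per_neuron * calcs_per_second
print(f"{brain_ops:.0e} calculations per second")  # 2e+16 calculations per second
```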

The computer metaphor for the brain has been sufficiently powerful and pervasive over a period of time to inform the mainstream of AI research: roughly speaking, the goal of this kind of AI has been to construct an artificial brain working to the same principles as the human brain. While there always have been other approaches to artificial intelligence, such as programming processes that achieve results like those of human brains without claiming to actually model brain processes, the science fiction author and former prominent computer scientist Vernor Vinge is still optimistic about the computer-model approach: “Much of this research will benefit from improvements in our tools for imaging brain functions and manipulating small regions of the brain” (Vinge, 2008).

Relation to other Technologies

Artificial intelligence itself draws strongly on results from the cognitive sciences, with much recent interest in neural imaging techniques. There are strong ties with robotics, since AI is a crucial element in the development of robots. Conversely, more recent approaches to AI have relied on robotics to provide the “body” needed for intelligent behaviour to emerge from interaction with the body’s environment. AI is also an essential component in ambient intelligence and in internet technology.

Critical Issues

‘…higher cognitive processes such as decision taking, learning and action still pose major challenges. Despite progress in some areas within cognitive systems and models, the provisional conclusion is that many hurdles still have to be overcome before an artificial system will be created which approaches the cognitive capacities of humans’ (European Technology Assessment Group, 2006).

‘Associated with this reality check is the recognition that classical attempts at modeling AI, based upon the capabilities of digital computers to manipulate symbols, are probably not sufficient to achieve anything resembling true intelligence. This is because symbolic AI systems, as they are known, are designed and programmed rather than trained or evolved. As a consequence, they function under rules and, as such, tend to be very fragile, rarely proving effective outside of their assigned domain. In other words, symbolic AI is proving to be only as smart as the programmer who has written the programmes in the first place’ (Arnall, 2003).

The following ethical issues are mentioned in the database in relation to AI.

- Autonomy and rights ‘Many of the major ethical issues surrounding AI – related development hinge upon the potential for software and robot autonomy. In the short term, some commentators question whether people will really want to cede control over our affairs to an artificial intelligent piece of software, which might even have its own legal rights. While some autonomy is beneficial, absolute autonomy is frightening. For one thing, it is clear that legal systems are not yet prepared for high autonomy systems, even in scenarios that are relatively simple to envisage, such as the possession of personal information. In the longer-term, however, in which it is possible to envisage extremely advanced applications of hard AI, serious questions arise concerning military conflict, and robot take-over and machine rights’ (Arnall, 2003, p. 57).

- Robots overtaking humankind There has always been a strong faction of AI researchers seeking to create an artificial intelligence superior to that of humans. One proponent of this position, Hans Moravec, titled a popular book “Mind Children” (Moravec, 1988), arguing that AI would one day inherit the position of humans as the most powerful intelligence on earth. The idea of a super-intelligence is also at the root of the “singularity” popularised by Ray Kurzweil (2005): technology exceeding the capacity and processing power of the human brain should be technologically feasible within the foreseeable future. Other approaches seek to enhance human beings with results of research in AI and robotics. A well-known proponent of this approach is Kevin Warwick, Professor of Cybernetics at the University of Reading, UK.

References

Academic publications

Garis de, H. (2005). The Artilect War. Palm Springs: ETC Publications.

Hutter, M. & Legg, S. (2005). A universal measure of intelligence for artificial agents. Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh, July 30 to August 5, 2005. San Francisco: Morgan Kaufmann. pp. 1509-1510.

Krieger, C. (1996) Neural Networks in Data Mining. Retrieved on 28 June 2010, from http://www.cs.uml.edu/~ckrieger/user/Neural_Networks.pdf

Kosko, B. & Isaka, S. (1993) Fuzzy Logic. In Scientific American, July 1993. Retrieved on 28 June 2010, from http://www.beopnix.net/~ulisescastro/IA/FuzzyLogicSA.pdf

Kurzweil, R. (2005). The Singularity is Near. When humans transcend biology. New York: Viking

Malone, D. (1993). Expert systems, artificial intelligence, and accounting. Journal of Education for Business, 68(4).

Mammar, K. & Chaker, A. (2009). Fuzzy logic-based control of power of PEM fuel cell system for residential application. Leonardo Journal of Sciences, Issue 14, January-June 2009, pp. 147-166. Retrieved on 28 June 2010, from http://ljs.academicdirect.org/A14/147_166.pdf

Mauno, V. & Crina, S. (2008). Medical expert systems. Current Bioinformatics, 3(1), January 2008, pp. 56-65. Bentham Science Publishers.

McCulloch, W. & Pitts, W. (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5, 115-133.

Minsky, M. & Papert, S. (1969). Perceptrons, MIT Press, Cambridge.

Moravec, H. (1988). Mind Children. Cambridge, Mass.: Harvard University Press.

Picard, R.W. (1995). Affective Computing. MIT Media Laboratory Perception Computing Section Technical Report No. 231, Revised November 26, 1995

Pires, J. N. (2007). Industrial Robots Programming: Building Applications for the Factories of the Future. New York: Springer.

Rosenblatt, F. (1958). The perceptron: a probabilistic model of information storage and organization in the brain, Psychological Review, 65, pp. 386-408.

Tan, C.N.W. (Undated) An Artificial Neural Networks Primer with Financial Applications Examples in Financial Distress Predictions and Foreign Exchange Hybrid Trading System. Retrieved 29 June 2010 from http://www.smartquant.com/references/NeuralNetworks/neural28.pdf

Turing, A.M. (1936). “On Computable Numbers, with an Application to the Entscheidungsproblem”. Proceedings of the London Mathematical Society, s2-42(1), 230-265. doi:10.1112/plms/s2-42.1.230

Turing, A.M. (1938). “On Computable Numbers, with an Application to the Entscheidungsproblem: A correction”. Proceedings of the London Mathematical Society, s2-43(6), 544-546. doi:10.1112/plms/s2-43.6.544

Vinge, V. (2008). Signs of the Singularity. IEEE Spectrum Special Issue: The Singularity, June 2008

Wolff, P. (2006): Ewig in der Zukunft (interview with Marvin Minsky). Süddeutsche Zeitung 139/2006, 16.

Governmental/regulatory sources

European Technology Assessment Group. (2006). Technology Assessment on Converging Technologies. Retrieved from http://www.europarl.europa.eu/stoa/publications/studies/stoa183_en.pdf

Korean Government. (2000). Vision 2025 Taskforce – Korea’s long term plan for science and technology development. Retrieved February 16, 2010, from http://www.inovasyon.org/pdf/Korea.Vision2025.pdf

Web Sites/Other sources

Arnall, A. H. (2003). Future Technologies, Today’s Choices. Nanotechnology, Artificial Intelligence and Robotics; A technical, political and institutional map of emerging technologies. Retrieved December 28, 2009, from http://www.greenpeace.org.uk/MultimediaFiles/Live/FullReport/5886.pdf

Engelmore, R. S. & Feigenbaum, E. (1993) Expert Systems and Artificial Intelligence. In Knowledge-based systems in Japan. Japanese Technology Evaluation Centre. Retrieved on 23 June, 2010, from http://www.wtec.org/loyola/kb/c1_s1.htm

Kurzweil, R. (2001): The Law of Accelerating Returns. http://www.kurzweilai.net/articles/art0134.html


[1]http://www.wtec.org/loyola/kb/c1_s1.htm

[2]Kurzweil, R. (2005): The Singularity is Near. When humans transcend biology. New York: Viking

[3]http://www.cs.brandeis.edu/~pablo/thesis/html/node22.html, accessed 10 March 2010.

