Talk:Artificial intelligence

Article milestones
Date             Process       Result
August 6, 2009   Peer review   Reviewed

The redirect Age of AI has been listed at redirects for discussion to determine whether its use and function meet the redirect guidelines. Readers of this page are welcome to comment on this redirect at Wikipedia:Redirects for discussion/Log/2024 February 8 § Age of AI until a consensus is reached. Duckmather (talk) 23:06, 8 February 2024 (UTC)

The redirect Ai tool has been listed at redirects for discussion to determine whether its use and function meet the redirect guidelines. Readers of this page are welcome to comment on this redirect at Wikipedia:Redirects for discussion/Log/2024 February 9 § Ai tool until a consensus is reached. Duckmather (talk) 06:13, 9 February 2024 (UTC)

First paragraph

Hi, I saw that you made some modifications, Maxeto0910. Most of it looks good. But for the introduction, the version before the modifications looks more concise and all-encompassing. It had a clear sense of what the three main definitions are.

For the first sentence, I'm ok with the modifications, except that it's not clear that it is the "broadest sense".

For the second sentence, saying that AI is mainly about the automation of "tasks typically associated with human intelligence" looks pretty correct. But the part "through machine learning, it develops and studies methods and software which enable machines to perceive their environment and take actions that maximize their chances of achieving defined goals" seems to already focus on a particular type of AI, the kind of AI agent based on machine learning.

Does anyone else have an opinion? Alenoach (talk) 00:32, 22 March 2024 (UTC)

Hello, I found the old introductory paragraph concerning the definitions too unspecific, uninformative and simply not detailed enough. I think the reworded one gives readers more context and information, and so a better sense of understanding. The old one was probably easier to understand, yes, but it did not provide a deep and comprehensive understanding of the underlying principles; it was too general, at least in my opinion. If you find the new introduction not concise enough, or not easy enough for laymen to understand, we could consider an "Introduction to artificial intelligence" article if it proves too difficult to strike a balance between comprehensibility and comprehensiveness, as we have done for other complex topics such as evolution.
I wrote "broadest sense" to make clear that there are several definitions (AI as intelligent machines, as a field of research, and as self-learning machines), which also don't contradict each other. And "intelligence of machines" is arguably by far the best-known, simplest and most basic definition of AI.
Sure, AI systems don't necessarily have to incorporate machine learning techniques, which enable them to continuously improve their performance; they can also have a fixed level of performance that was entirely human-programmed rather than machine-learned. Nonetheless, machine learning is definitely the focus of modern AI research, which I wanted to make clear by writing "focusing on". But I agree that this part could sound misleading to readers who don't know this, causing them to wrongly assume that it is the focus of all AI research. If you have any suggestions for making clear that this is merely the main focus of most modern AI research, without making it too complex, let me know. -- Maxeto0910 (talk) 00:46, 22 March 2024 (UTC)
Two changes I would like to make, if it's okay with you: (1) Scratch "machine learning". (Machine learning is still a subfield of AI, and other kinds of AI techniques (such as logic) will probably become important again as we try to make learning systems more verifiable, explainable and controllable.) (2) Scratch the reference to humans, out of respect for the long-running debate about "intelligence in general" vs. "human intelligence" (see the section on "defining AI" in this article). Okay? --- CharlesTGillingham (talk) 21:27, 24 March 2024 (UTC)
The claim that AI focuses on the automation of intelligent behavior through machine learning is simply false, and the qualification "through machine learning" should be deleted. The contrast between human and machine intelligence is also false. It is contradicted, for example, by the material in such books as Levesque's "Thinking as Computation".[1] It is also entirely at odds with computational thinking more generally. Robert Kowalski (talk) 15:05, 27 March 2024 (UTC)
I like the "defined goals" bit, as this is very much in line with Russell & Norvig. ---- CharlesTGillingham (talk) 21:28, 24 March 2024 (UTC)
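
(A side note for readers following this thread: the "defined goals" framing discussed above describes an agent that perceives its environment and picks the action that best advances a defined goal. The toy sketch below is only my illustration of that framing; the function names, the numeric state and the utility rule are invented for the example and come from neither the article nor its sources.)

    # Toy sketch (Python) of the agent framing under discussion: perceive the
    # current state, score each available action against a defined goal, act.
    # Everything here (names, numbers, the utility rule) is illustrative only.

    def expected_utility(state, action, goal):
        """Score an action by how close it brings the state to the goal."""
        return -abs((state + action) - goal)

    def choose_action(state, actions, goal):
        """Pick the action with the highest expected utility."""
        return max(actions, key=lambda a: expected_utility(state, a, goal))

    def run_agent(state, goal, actions=(-1, 0, 1), max_steps=20):
        """Perceive-decide-act loop that stops once the goal is reached."""
        for _ in range(max_steps):
            if state == goal:
                break
            state += choose_action(state, actions, goal)
        return state

    print(run_agent(state=3, goal=10))  # -> 10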

Wiki Education assignment: IFS213-Hacking and Open Source Culture

This article is currently the subject of a Wiki Education Foundation-supported course assignment, between 30 January 2024 and 10 May 2024. Further details are available on the course page. Student editor(s): Kylezip (article contribs).

— Assignment last updated by KAN2035117 (talk) 02:16, 25 March 2024 (UTC)

A few cuts

This sentence was in a paragraph on a different topic. Could go in "Applications".

In 2019, Bengaluru, India, deployed AI-managed traffic signals. The system uses cameras to monitor traffic density and adjusts signal timing based on the interval needed to clear traffic.[2]
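
(Purely to illustrate the mechanism that sentence describes, and not the actual Bengaluru deployment, whose internals the cited source does not give, a density-based green-time rule could look like the sketch below; the clearance rate and the timing bounds are assumed values.)

    # Illustrative sketch only: green time scaled to the interval needed to
    # clear the observed queue. Not the real Bengaluru system; the clearance
    # rate and the min/max bounds are assumptions made up for this example.

    def green_time_seconds(vehicles_waiting, clearance_rate_per_s=0.5,
                           min_green=10, max_green=90):
        """Return a green interval roughly long enough to clear the queue."""
        needed = vehicles_waiting / clearance_rate_per_s
        return max(min_green, min(max_green, round(needed)))

    # Example: a camera-based counter reports 30 waiting vehicles.
    print(green_time_seconds(30))  # -> 60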

References

  1. ^ Levesque, H.J., 2012. Thinking as computation: A first course. MIT Press.
  2. ^ "AI traffic signals to be installed in Bengaluru soon". NextBigWhat. 24 September 2019. Retrieved 1 October 2019.

---- CharlesTGillingham (talk) 03:09, 25 March 2024 (UTC)

This paragraph has no sources and was misplaced. Could be adapted for the section "Regulations", but research and a rewrite would be necessary.

Possible options for limiting AI include:
  - using Embedded Ethics or Constitutional AI, where companies or governments can add a policy
  - restricting high levels of compute power in training
  - restricting the ability to rewrite its own code base
  - restricting certain AI techniques, but not in the training phase
  - open-source (transparency) vs. proprietary (could be more restricted)
  - a backup model with redundancy
  - restricting security, privacy and copyright
  - restricting or controlling the memory
  - real-time monitoring
  - risk analysis
  - emergency shut-off
  - rigorous simulation and testing
  - model certification
  - assessing known vulnerabilities
  - restricting the training material
  - restricting access to the internet
  - issuing terms of use.

---- CharlesTGillingham (talk) 03:09, 25 March 2024 (UTC)

This is undue weight on the period 1940–1956 -- we have to cover a lot more ground here. I've edited this down to just cover the two most notable: McCulloch & Pitts and the Turing test. This material could be integrated into the article History of AI, which doesn't cover Turing's work in this much detail.

Alan Turing was thinking about machine intelligence at least as early as 1941, when he circulated a paper on the subject that may be the earliest paper in the field of AI – though it is now lost.[1]

The earliest surviving work generally recognized as AI was McCulloch and Pitts' 1943 design for Turing-complete artificial neurons – the first mathematical model of a neural network.[2] The paper was influenced by Turing's earlier 1936 paper "On Computable Numbers", which used a similar two-state Boolean formalism, but was the first to apply it to neuronal function.[1]
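
(A side note for readers of this thread: a McCulloch-Pitts "two-state Boolean neuron" is simply a threshold unit over binary inputs. The short sketch below is my own illustration of that idea; the weights and thresholds are standard textbook choices, not values taken from the 1943 paper.)

    # Illustration of a McCulloch-Pitts two-state neuron: binary inputs, fixed
    # weights, and a threshold; it outputs 1 only when the weighted sum of its
    # inputs reaches the threshold. Weights/thresholds are textbook examples.

    def mp_neuron(inputs, weights, threshold):
        """Fire (1) if the weighted sum of binary inputs meets the threshold."""
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    def AND(x1, x2):
        return mp_neuron((x1, x2), weights=(1, 1), threshold=2)

    def OR(x1, x2):
        return mp_neuron((x1, x2), weights=(1, 1), threshold=1)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))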

The term "machine intelligence" was used by Alan Turing during his lifetime; after his death in 1954, the same field was usually referred to as "artificial intelligence". In 1950, Turing published the best known of his papers, "Computing Machinery and Intelligence", which introduced to the general public the concept now known as the Turing test. There followed three radio broadcasts on AI by Turing: the lectures "Intelligent Machinery, A Heretical Theory" and "Can Digital Computers Think?", and the panel discussion "Can Automatic Calculating Machines be Said to Think?" By 1956, computer intelligence had been actively pursued for more than a decade in Britain; the earliest AI programmes were written there in 1951–1952.[1]

In 1951, checkers and chess programs that a person could play against were written for the Ferranti Mark 1 computer at the University of Manchester.[3]

References

  1. ^ a b c Copeland, J., ed. (2004). The Essential Turing: the ideas that gave birth to the computer age. Oxford, England: Clarendon Press. ISBN 0-19-825079-7.
  2. ^ Russell & Norvig (2021), p. 17.
  3. ^ See "A Brief History of Computing" at AlanTuring.net.

We report total investment, education and job openings. Cut this (total patents) because it's a bit out of date and the list was too long.

WIPO reported that AI was the most prolific emerging technology in terms of the number of patent applications and granted patents.[1]

References

  1. ^ "Intellectual Property and Frontier Technologies". WIPO. Archived from the original on 2 April 2022. Retrieved 30 March 2022.

CharlesTGillingham (talk) 14:24, 25 March 2024 (UTC)

We don't need this because it's not really part of the narrative. AI, like any science, is an international project. (And long experience at Wikipedia has taught me that anything that might be construed as nationalism will eventually cause bloat when other editors add contrary opinions.)

The large majority of the advances have occurred within the United States, with its companies, universities, and research labs leading artificial intelligence research.[1]

References

  1. ^ Frank (2023).

CharlesTGillingham (talk) 14:38, 25 March 2024 (UTC)