Talk:Artificial neuron

From Wikipedia, the free encyclopedia

There seems to be at least a little overlap between this article and perceptron. Perhaps a partial merge of the overlapping info and some cross-links would be a good idea? --Delirium 23:04, Oct 25, 2003 (UTC)

Exactly what I was about to point out myself. Maybe this article should be whittled down to the key concepts of an artificial neuron per se (biological basis etc.), and Perceptron expanded to cover the specifics of the McCulloch-Pitts implementation - after all, there are other kinds of neural net, which therefore contain other kinds of neuron. Indeed, some of the comments here (such as the values being boolean) arguably don't even generalise over all perceptrons. Once I've finished defining them for my coursework, I'll try to sort out the various articles here. - IMSoP 18:24, 11 Dec 2003 (UTC)

What does this article want to tell us?[edit]

Once you are clear about what you want to describe here, you may want to fix the interwiki links to either de:Künstliches Neuron or de:McCulloch-Pitts-Zelle --chrislb 问题 07:57, 4 July 2006 (UTC)[reply]

citation needed[edit]

Where does that criticism come from -- namely the one about artificial neurons not having multiple output axons? I've never come across that; please provide a citation!

W0 to Wm inputs is m+1 inputs, not m[edit]

I found this article researching something else. w0 through wm makes w an array with m + 1 elements. I would think that the first sentence under Basic Structure should be either:

For a given artificial neuron, let there be m + 1 inputs with signals x0 through xm and weights w0 through wm.

or

For a given artificial neuron, let there be m inputs with signals x0 through xm - 1 and weights w0 through wm - 1.

I don't want to make this change as A) it might be correct in Engineersp33k and B) I don't have the expertise to know how this might change other parts of the discussion that follows. TechBear 17:03, 24 October 2007 (UTC)[reply]
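For what it's worth, the "m + 1" convention usually arises because the extra input x0 is fixed at 1 so that w0 acts as the bias. A minimal sketch (the values and the step activation here are illustrative, not taken from the article):

```python
# Weighted sum over m + 1 signals x0..xm and weights w0..wm,
# where x0 is conventionally fixed at 1 so that w0 is the bias.
def neuron_output(x, w, activation):
    assert len(x) == len(w)          # m + 1 signals, m + 1 weights
    s = sum(xi * wi for xi, wi in zip(x, w))
    return activation(s)

step = lambda s: 1 if s >= 0 else 0

# x = [1, x1, x2]: the leading 1 is the bias input x0
print(neuron_output([1, 0, 1], [-0.5, 1.0, 1.0], step))  # 1
print(neuron_output([1, 0, 0], [-0.5, 1.0, 1.0], step))  # 0
```

Under that reading, "m inputs" refers to the m genuine signals x1 through xm, with x0/w0 added for the bias - which is why the indexing looks off by one.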

Synapse?[edit]

Shouldn't the output be axon, since it refers to input as dendrites? Just wondering... --Bobianite (talk) 02:00, 12 April 2008 (UTC)[reply]

Criticism[edit]

The criticism that artificial neurons are not biologically plausible is obvious - that's why they are called artificial. I am not sure that Izhikevich wants to be cited for pointing out the obvious. I would dismiss the "criticism" section, since it suggests that there is a debate over whether the artificial neuron is biologically plausible or not. A section discussing what such an artificial neuron could tell us about real biology would be more fruitful, e.g. the capacity of neurons. —Preceding unsigned comment added by 138.245.96.20 (talk) 16:53, 5 March 2009 (UTC)[reply]

Training[edit]

The article refers to "training" in the example algorithm and the following spreadsheet, but no details of how an artificial neuron might be trained are given. The text above the example says that there is more than one way. Perhaps at least one method should be described, as it seems to me that the article makes little sense without such a description. I can guess that the process involves trying various inputs and adjusting weights until the required outputs are achieved, but it seems to me that unless this is done very carefully the process might not even converge. It also raises the question: if you know what outputs are required for given inputs (as in the example of the logical OR function), then there are much simpler ways of implementing the required function without any training. Presumably the utility of artificial neurons comes from the fact that there must be some way of training in cases where the required outputs are not known in advance. The article would be much more useful if someone could describe how this is done. 86.138.14.83 (talk) 19:53, 17 November 2010 (UTC)[reply]

nonlinear combination function[edit]

Some explanation of why a nonlinear combination function is needed to get a multilayer network that can't be reduced to a single layer network would be nice — Preceding unsigned comment added by 204.17.143.10 (talk) 22:18, 19 June 2012 (UTC)[reply]
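The usual argument is that composing linear maps yields another linear map: two layers with weight matrices W1 and W2 and no nonlinearity compute exactly what the single matrix W2 @ W1 computes. A small sketch with illustrative 2x2 matrices:

```python
# Two stacked linear layers (no activation) collapse to one layer:
# W2 @ (W1 @ x) == (W2 @ W1) @ x, so depth adds no expressive power
# unless a nonlinear function sits between the layers.
def matvec(W, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

W1 = [[1.0, 2.0], [3.0, 4.0]]
W2 = [[0.5, -1.0], [1.5, 0.25]]
x = [2.0, -1.0]

two_layers = matvec(W2, matvec(W1, x))

# Collapsed single layer: W = W2 @ W1
W = [[sum(W2[i][k] * W1[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
one_layer = matvec(W, x)

print(two_layers == one_layer)  # True
```

Inserting a nonlinear function between the two matvec calls breaks this collapse, which is what lets multilayer networks represent functions no single layer can.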

Image[edit]

The image is pretty ugly and I want to replace it, but the replacement has different numbering and a different threshold/bias. Later in the article there is a formulation that is simpler but inconsistent with the earlier image, and the same goes for the pseudocode numbering. It would be nice to make them all consistent and use a prettier picture.

http://neuralnetworksanddeeplearning.com/chap1.html also counts from 1. — Omegatron (talk) 05:25, 6 January 2018 (UTC)[reply]

Types of Transfer Functions section[edit]

In the "Types of Transfer Functions" section, there is a link to a "transfer function" main article, but this links to the kind of transfer function used in linear systems. A sentence in the first section of this page warns against confusing that kind of linear-system transfer function with an artificial neuron's transfer function. I'd strongly recommend removing this link, or changing it to point to the Activation Function main article (which is already linked in the first sentence of the "Types of Transfer Functions" section). 198.102.151.243 (talk) 15:56, 6 July 2023 (UTC)[reply]