Artificial Intelligence Abstracts

(7 diagrams, 11 references) 90-51787 Unsupervised Learning by Backward Inhibition, Tomas Hrycej, PCS Computer Systeme, Munich, FRG, IJCAI-89: 11th Intl Joint Conf on AI, Detroit, MI, Aug 20-25, 89, p170 (5), conf. paper. Backward inhibition in a two-layer connectionist ... networks of neuron-like units, with each unit connected to a chosen subset of units in the adjacent layers, that learn by ...

Author:

Publisher:

ISBN: UOM:39015047375525

Category: Artificial intelligence

Page:


Algorithmic Learning Theory

Example 2. Consider the 4-Node architecture a = (2, 2); it has a hidden layer with 2 nodes, and an output layer with 2 nodes. Since there are two output nodes, there are four possible output values: 00, 01, 10 and 11. The first layer of ...
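
As a rough illustration of the excerpt (not taken from the book), the sketch below builds a tiny threshold network with the stated shape a = (2, 2), i.e. two hidden nodes feeding two output nodes, and enumerates its outputs; with two binary output nodes, only the four patterns 00, 01, 10 and 11 are possible. The weights are arbitrary placeholders.

```python
import itertools
import numpy as np

# Hypothetical weights for a (2, 2) architecture: 2 inputs -> 2 hidden -> 2 output.
# Only the shape matches the excerpt; the values are arbitrary.
W_hidden = np.array([[1.0, -1.0], [0.5, 1.0]])
W_output = np.array([[1.0, 0.0], [-1.0, 1.0]])

def step(z):
    """Threshold activation, so every node produces a binary output."""
    return (z > 0).astype(int)

def forward(x):
    h = step(W_hidden @ x)       # hidden layer (2 nodes)
    return step(W_output @ h)    # output layer (2 nodes)

# Two binary output nodes can only realise the patterns 00, 01, 10, 11.
outputs = {tuple(forward(np.array(x))) for x in itertools.product([0, 1], repeat=2)}
print(sorted(outputs))
```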

Author:

Publisher:

ISBN: UOM:39015034830359

Category: Computer algorithms

Page:


World Congress on Neural Networks San Diego

For the training of the output layer, the training set also contained the desired output function. ... (11) This learning algorithm ensures that a given hidden unit will learn to ignore inputs which do not contribute to the unit's firing.
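
The learning rule itself is elided in the snippet above. Purely as an illustration of the stated property (a hidden unit coming to ignore inputs that do not contribute to its firing), the sketch below uses an instar-style update, Δw = η · y · (x − w), under which the weights of inputs that are inactive whenever the unit fires decay toward zero. This is an assumed stand-in, not the algorithm from the proceedings.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.uniform(0.2, 0.8, size=4)   # incoming weights of one hidden unit (4 inputs)
eta = 0.1                           # assumed learning rate

def instar_update(w, x, y, eta):
    """When the unit fires (y = 1), move the weights toward the current input."""
    return w + eta * y * (x - w)

# Input 3 is always off, so it never contributes to the unit's firing;
# its weight is driven toward zero and the unit learns to ignore it.
for _ in range(200):
    x = np.array([1.0, 1.0, rng.integers(0, 2), 0.0])
    y = 1.0 if x[0] + x[1] >= 2 else 0.0
    w = instar_update(w, x, y, eta)

print(np.round(w, 3))
```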

Author: Inns

Publisher: Psychology Press

ISBN: 080581745X

Category:

Page: 821


Neural and Synergetic Computers

Each connection change depends only on the statistics which are collected in parts I and II about the two units ... The simulation time on a VAX 11/750 for a learning cycle is approximately 6 min. ... The term "feedforward" implies that no processing output can be an input for a processing element on the same layer or a ...
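
To make the quoted feedforward constraint concrete, here is a minimal sketch (not from the book) that checks a list of connections against the rule that a unit's output may only feed units in strictly later layers, never the same layer or an earlier one.

```python
# Assign each unit a layer index; a connection (src, dst) is feedforward only if
# it runs from a lower-numbered layer to a strictly higher-numbered one.
layer_of = {"x1": 0, "x2": 0, "h1": 1, "h2": 1, "o1": 2}   # hypothetical units

def is_feedforward(connections, layer_of):
    return all(layer_of[src] < layer_of[dst] for src, dst in connections)

print(is_feedforward([("x1", "h1"), ("h2", "o1")], layer_of))  # True
print(is_feedforward([("h1", "h2")], layer_of))                # False: same layer
```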

Author: Hermann Haken

Publisher: Springer Science & Business Media

ISBN: 9783642741197

Category: Science

Page: 263

Neural and Synergetic Computers deals with basic aspects of this rapidly developing field. Several contributions are devoted to the application of basic concepts of synergetics and dynamic systems theory to the construction of neural computers. Further topics include statistical approaches to neural computers and their design (for example by sparse coding), perception, motor control, and new types of spatial multistability in lasers.

Proceedings of the Midwest Symposium on Circuits and Systems

number of hidden-layer units, the redundancy in feature recognition for this network is rather large. This seems to be ... The most important result of the preliminary training runs has been that for η = 5 the network did not train properly (runs 9-11). ... With η = 2, ensuring slower learning, the problem of excessive errors is reduced, but at least several bit errors have consistently occurred (runs 5-8). Finally ...

Author:

Publisher:

ISBN: UOM:39015023307401

Category: Electric circuits

Page:


Emerging Intelligent Computing Technology and Applications

Using the difference between the actual x(t+·) and the predicted x̂(t+·), the BP algorithm [2] is executed to tune the weights of each RBM again. ... The number of layers, the number of units in every layer, the learning rate and so on need to be decided when the model is applied to real problems. ... neural networks work well for nonlinear data prediction, but their learning ability drops when a strong linear factor exists in the time series [11].
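
The excerpt describes fine-tuning the stacked RBMs with back-propagation on the prediction error. The sketch below shows only that generic idea, a gradient step on the squared difference between a prediction x̂ and its target x, for a single linear readout layer; every name, shape and value here is assumed for illustration and is not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(scale=0.1, size=(1, 4))    # readout weights (assumed shape)
lr = 0.01                                 # learning rate, one of the hyperparameters
                                          # the excerpt says must be chosen by hand

def finetune_step(W, h, x_target, lr):
    """One BP-style update: step down the gradient of 0.5 * ||x_hat - x_target||^2."""
    x_hat = W @ h                         # prediction from the features h
    grad = np.outer(x_hat - x_target, h)  # gradient of the squared error w.r.t. W
    return W - lr * grad

h = rng.normal(size=4)                    # features from the (pretrained) layers below
x_target = np.array([0.7])
for _ in range(200):
    W = finetune_step(W, h, x_target, lr)
print((W @ h).item())                     # close to the target 0.7
```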

Author: De-Shuang Huang

Publisher: Springer

ISBN: 9783642318375

Category: Computers

Page: 509

This book constitutes the refereed proceedings of the 8th International Conference on Intelligent Computing, ICIC 2012, held in Huangshan, China, in July 2012. The 242 revised full papers presented in the three volumes LNCS 7389, LNAI 7390, and CCIS 304 were carefully reviewed and selected from 753 submissions. The papers in this volume (CCIS 304) are organized in topical sections on Neural Networks; Particle Swarm Optimization and Niche Technology; Kernel Methods and Supporting Vector Machines; Biology Inspired Computing and Optimization; Knowledge Discovery and Data Mining; Intelligent Computing in Bioinformatics; Intelligent Computing in Pattern Recognition; Intelligent Computing in Image Processing; Intelligent Computing in Computer Vision; Intelligent Control and Automation; Knowledge Representation/Reasoning and Expert Systems; Advances in Information Security; Protein and Gene Bioinformatics; Soft Computing and Bio-Inspired Techniques in Real-World Applications; Bio-Inspired Computing and Applications.

Proceedings of the 1991 IEEE International Symposium on Intelligent Control

It performs this unsupervised learning by using layers of processing units that compete with one another, the resulting ... (11) also uses a substructure that is similar to a basic competitive learning representation. ...
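
The update equation at the end of the snippet is too garbled to reconstruct. As a generic stand-in for the "basic competitive learning representation" it refers to, the sketch below implements the standard winner-take-all rule Δw_winner = η (x − w_winner); all sizes and values are assumed.

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.uniform(size=(3, 2))     # 3 competing units, 2-dimensional inputs (assumed)
eta = 0.05

def competitive_step(W, x, eta):
    """Winner-take-all: only the unit closest to the input moves toward it."""
    winner = np.argmin(np.linalg.norm(W - x, axis=1))
    W[winner] += eta * (x - W[winner])
    return W

# Unsupervised: the units drift toward the input clusters they win.
centers = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.9]])
for _ in range(2000):
    x = centers[rng.integers(0, 3)] + rng.normal(scale=0.02, size=2)
    W = competitive_step(W, x, eta)
print(np.round(W, 2))
```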

Author: IEEE Control Systems Society

Publisher: Institute of Electrical & Electronics Engineers(IEEE)

ISBN: 0780301064

Category: Artificial intelligence

Page: 506


1995 IEEE International Conference on Neural Networks

The relationship between the input and output of each unit in the first layer is defined as ... = X_i and ... (13), where ... (14), (15) ... It should be noted that (15) can be considered as a kind of generalized sigmoid function. ... in the network by learning only the weight coefficients between the first layer and the second layer.
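
Equations (13)-(15) are unreadable in the snippet, but the visible fragments of (15) involve a ratio of exponentials, so it is presumably a softmax-like normalized exponential. The sketch below shows that common construction, which generalizes the sigmoid to several units; it is an illustration, not the paper's definition.

```python
import numpy as np

def generalized_sigmoid(z):
    """Normalized exponential (softmax) over the units of a layer.
    With two units and z = [a, 0] it reduces to the ordinary sigmoid of a."""
    e = np.exp(z - z.max())        # subtract the max for numerical stability
    return e / e.sum()

z = np.array([2.0, 0.0])
print(generalized_sigmoid(z))      # [0.881, 0.119]
print(1 / (1 + np.exp(-2.0)))      # 0.881..., the two-unit special case
```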

Author: IEEE Neural Networks Council

Publisher:

ISBN: UCSD:31822021494992

Category: Computers

Page: 3219


Neural Computing for Structural Mechanics

(11) where k represents an upper-layer unit (the output layer is the uppermost and the input layer is the lowermost layer) ... ΔW_st(old) is the previous weight change between the same two units, η is the learning rate and α is the momentum.
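
The full rule is cut off in the snippet; with the symbols it names, the usual back-propagation update with momentum is ΔW_st(new) = −η ∂E/∂W_st + α ΔW_st(old), i.e. a learning-rate term plus a fraction of the previous weight change. A minimal sketch with placeholder gradients:

```python
eta, alpha = 0.1, 0.9            # learning rate and momentum, as named in the excerpt

def momentum_update(w, grad, prev_delta, eta, alpha):
    """New weight change = gradient term plus momentum times the previous change."""
    delta = -eta * grad + alpha * prev_delta
    return w + delta, delta

w, prev_delta = 0.5, 0.0
for grad in [0.3, 0.2, 0.1]:     # placeholder values of dE/dw
    w, prev_delta = momentum_update(w, grad, prev_delta, eta, alpha)
print(round(w, 4))               # 0.3707
```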

Author: B. H. V. Topping

Publisher:

ISBN: UOM:39015040050604

Category: Science

Page: 178

Describing the application of artificial neural networks to structural mechanics, this book will be of interest to engineers, computer scientists and mathematicians working on the application of neural computing to structural mechanics and in particular finite element problems. It is accompanied by a voucher for a free software disk.

Advanced Technologies for Solar Photovoltaics Energy Systems

In the training process of Case I and Case II, the network configuration and the training parameters are given in Table 4: Input Dimension 11 / 12, Output Dimension 1 / 1, LSTM Layers 1 / 1, Learning Rate ..., Hidden Units ... (Case I / Case II) ...

Author: Saad Motahhir

Publisher: Springer Nature

ISBN: 9783030645656

Category:

Page:


Genetic and Evolutionary Computation Conference

Figure 1: The structure of the RFV (four layers; units A1 ... Aw in layer 2 and R1 ... Rw in layer 3). ... learning and includes an extra layer of units with recurrent connections that provides a kind of internal memory.

Author:

Publisher:

ISBN: UIUC:30112071085010

Category: Evolutionary computation

Page:


Networking Self Teaching Guide

L2TP (Layer 2 Tunneling Protocol), 592 labels in Domain Name Service, 204–205 lack of spanning tree loops, 669 LACP (Link ... See Ethernet exercises for, 107–108 introduction to, 7, 10–11, ... LAPD (link access procedure D) channel, 335–336 LAT (local area transport), 579 LAUs (lobe access units), 80–81 Layer 1 ...

Author: James Edwards

Publisher: John Wiley & Sons

ISBN: 9781119120223

Category: Computers

Page: 864

IT professionals who want to move into the networking side in a corporate or enterprise setting will find the detailed content they need to get up to speed on the very latest networking technologies; plus, current networking professionals will find this a valuable and up-to-date resource. This hands-on guide is designed so that you can select, design, and implement an actual network using the tutorials and steps in the book. Coverage includes an overview of networking technologies, including the hardware, software, transmission media, and data transfer processes; in-depth coverage of OSI and TCP/IP reference models; operating systems and other systems software used in today's networks; LANs, WANs, and MANs, including the components and standards that operate within each type of area network; and more.

Ambiguity in Language Learning Computational and Cognitive Models

In order to discriminate contextually conditioned verbs from other verbs, the child has to learn conjunctions of a contextual feature and one or more morphosemantic features. ... (in the hidden unit block for context) and a feature detector t for "verb of throwing" (in the hidden unit layer for morphosemantics). ... 11 This is what in fact happened in the model described here. ... It is virtually impossible that two units will behave exactly the same way if their weights are randomly selected.
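
The last sentence is the usual symmetry-breaking argument for random weight initialization. The toy sketch below (all values assumed, and using a generic Hebbian-style update rather than the model's own rule) shows that two hidden units starting from identical weights receive identical updates and never differentiate, while randomly initialized units diverge immediately.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=5)                          # one input pattern (assumed)

def hidden_update(W, x, eta=0.1):
    """The same update function is applied to every unit; only the weights differ."""
    y = np.tanh(W @ x)
    return W + eta * np.outer(y, x)

W_same = np.tile(rng.normal(size=5), (2, 1))    # two units with identical weights
W_rand = rng.normal(size=(2, 5))                # two units with random weights

for _ in range(10):
    W_same = hidden_update(W_same, x)
    W_rand = hidden_update(W_rand, x)

print(np.allclose(W_same[0], W_same[1]))   # True: identical units stay interchangeable
print(np.allclose(W_rand[0], W_rand[1]))   # False: random weights break the symmetry
```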

Author: Hinrich Schütze

Publisher:

ISBN: STANFORD:36105010546856

Category:

Page: 356


Proceedings of the SICE Annual Conference

Figure labels: Input Layer (1st Layer), Input Vector; Output Layer (3rd Layer), Output Vector. With BP learning, the mean square ... 2.3 DLBP learning: If we choose η1 > η2, the learning of almost all hidden units progresses and the influence of ...

Author: Keisoku Jidō Seigyo Gakkai (Japan). Gakujutsu Kōenkai

Publisher:

ISBN: UOM:39015030267937

Category: Adaptive control systems

Page:


Electron Technology

hidden layer): the input layer consists of twelve units ... in the sensor array; the output layer has two units, equal ... The NO2 concentration varied in the range 10 ppb to 4 ppm, CO 1 to 20 ppm, and the interference concentrations were chosen to be 5 ... performance of the artificial neural network [11]. ... the Neural Network Simulator software, and the different parameters of the training of the neural network were changed (number of units in the hidden layer, activation function, learning rate and ...

Author:

Publisher:

ISBN: STANFORD:36105028828163

Category: Electronics

Page:


The Official Proceedings of Speech Tech

A total error, E, over all patterns may be defined: E = Σ_p E_p (2). The learning algorithm minimises E w.r.t. {W_ij} by ... These layered networks are an extension of the Perceptron networks introduced by Rosenblatt thirty years ago [6]. However, Rosenblatt's perceptrons were limited to a single layer of connections (two layers of units, viz. inputs and outputs). ... that of hidden Markov modelling in a series of experiments on isolated spoken digit recognition from multiple speakers [11].
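
The error definition is mangled in the excerpt; it reads as the usual sum of per-pattern errors, E = Σ_p E_p, minimised with respect to the weights {W_ij} by gradient descent. A minimal sketch of that idea, with made-up data and a single linear unit, follows.

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(20, 3))                   # 20 training patterns (made up)
targets = X @ np.array([1.0, -2.0, 0.5])       # made-up target outputs

W = np.zeros(3)
eta = 0.01

def total_error(W):
    """E = sum over patterns p of E_p, with E_p = 0.5 * (output - target)^2."""
    return 0.5 * np.sum((X @ W - targets) ** 2)

for _ in range(500):                           # gradient descent on E w.r.t. the weights
    grad = X.T @ (X @ W - targets)
    W -= eta * grad

print(round(total_error(W), 8))                # near zero once the unit fits the data
```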

Author:

Publisher:

ISBN: PSU:000015651372

Category: Automatic speech recognition

Page:


INTERSPEECH 2004 ICSLIP

We designed the input and output units of the ANNs, the number of hidden layers, and the number of nodes in each ... Two BP neural networks were trained to learn pitch and energy variations of the center phoneme from the 11 ...

Author:

Publisher:

ISBN: CORNELL:31924105304137

Category: Automatic speech recognition

Page:


Physical Review

3-7 (and graphically described by solid triangles) connected to a two-unit trainable hidden layer (denoted h), which in turn ... in addition, there are two units of "context" neurons fully connected to the input layer and the output layer (denoted c) ... for how sequence-structure mappings are defined, and allowed the more flexible context hidden units to both learn from ... layer networks. The B1 network fully connects the input layer to eleven hidden units that are connected to the two ...
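
As a rough illustration of the "context" units described in the excerpt (a simple recurrent, Elman/Jordan-style arrangement; the exact wiring in the paper is only partially visible in the snippet), the sketch below keeps a small context vector that is fed back in alongside the next input. All sizes and weights are placeholders.

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_hidden, n_ctx, n_out = 3, 2, 2, 2     # assumed sizes (two hidden and two
                                              # context units, as in the snippet)
W_h = rng.normal(scale=0.5, size=(n_hidden, n_in + n_ctx))  # hidden sees input + context
W_o = rng.normal(scale=0.5, size=(n_out, n_hidden))
W_c = rng.normal(scale=0.5, size=(n_ctx, n_hidden))         # context mixes the hidden state

def step(x, context):
    """One time step: the hidden state depends on the input and the saved context."""
    h = np.tanh(W_h @ np.concatenate([x, context]))
    y = np.tanh(W_o @ h)
    new_context = np.tanh(W_c @ h)            # internal memory carried forward
    return y, new_context

context = np.zeros(n_ctx)
for t in range(3):
    x = rng.normal(size=n_in)
    y, context = step(x, context)
    print(t, np.round(y, 3))
```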

Author:

Publisher:

ISBN: UOM:39076001525679

Category: Fluids

Page:


The International Journal of Neural Networks

This section discusses the three major learning methods, and details two specific learning rules which allow a network to adapt. ... Most learning rules are based on a general theory of neural learning developed by Donald Hebb in the 1940s [11], called Hebb's rule: If two units are ... units: The Delta Rule essentially assigns credit or blame to the input elements according to their activation levels.
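
For reference, the two classical rules the excerpt alludes to are usually written as Δw_i = η · x_i · y (Hebb's rule: strengthen a connection when the two units it joins are active together) and Δw_i = η · (t − y) · x_i (the Delta Rule: apportion credit or blame to each input according to its activation). A minimal sketch with placeholder values:

```python
import numpy as np

eta = 0.1

def hebb_update(w, x, y, eta):
    """Hebb's rule: a weight grows when its input x_i and the unit's output y co-occur."""
    return w + eta * y * x

def delta_update(w, x, target, eta):
    """Delta Rule: the error (target - output) is shared out to inputs by activation level."""
    y = w @ x
    return w + eta * (target - y) * x

w = np.zeros(3)
x = np.array([1.0, 0.0, 1.0])
print(hebb_update(w, x, y=1.0, eta=eta))        # only the active inputs are strengthened
print(delta_update(w, x, target=1.0, eta=eta))  # blame goes to the active inputs
```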

Author:

Publisher:

ISBN: PSU:000018890143

Category: Artificial intelligence

Page:
