401. ______ is a possible mechanism for synaptic modification in the brain.
Hebbian Rule
McCulloch-Pitts neuron
Hopfield network
None of the above
Show me the answer
Answer: 1. Hebbian Rule
Explanation:
The Hebbian Rule is a theory that explains how neurons adapt and strengthen their connections based on activity.
It is often summarized as "cells that fire together, wire together."
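As an illustrative aside (not part of the original question bank), the Hebbian update Δw_i = η·x_i·y can be sketched in Python; the function name and learning rate η are assumptions for the example:

```python
# Hebbian update sketch: delta_w_i = eta * x_i * y, so a weight grows
# when its pre-synaptic input x_i and the post-synaptic output y are
# active together ("fire together, wire together").
def hebbian_update(w, x, y, eta=0.1):
    return [wi + eta * xi * y for wi, xi in zip(w, x)]

w = [0.0, 0.0]
# Both inputs active and output active -> both weights strengthen.
w = hebbian_update(w, x=[1.0, 1.0], y=1.0)
print(w)  # [0.1, 0.1]
```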
402. The ______ can be used to train neural networks for pattern recognition.
Hebbian Rule
McCulloch-Pitts neuron
Hopfield network
Genetic Algorithm
Show me the answer
Answer: 1. Hebbian Rule
Explanation:
The Hebbian Rule is used in training neural networks, particularly for tasks like pattern recognition.
It helps in strengthening the connections between neurons that are frequently activated together.
403. The rule stating that if two neurons on either side of a synapse are activated simultaneously, the strength of that synapse will increase is the ______.
Hebbian Learning Rule
Perceptron Learning Rule
Delta Learning Rule
Genetic Algorithm Learning Rule
Show me the answer
Answer: 1. Hebbian Learning Rule
Explanation:
The Hebbian Learning Rule states that if two neurons are activated simultaneously, the synaptic connection between them strengthens.
This is the basis of learning in neural networks.
404. ______ is one of the neural network learning rules.
Hebbian Learning Rule
Perceptron Learning Rule
Delta Learning Rule
All of the above
Show me the answer
Answer: 4. All of the above
Explanation:
Hebbian Learning Rule, Perceptron Learning Rule, and Delta Learning Rule are all learning rules used in neural networks.
Each rule has its own mechanism for updating weights and improving the network's performance.
405. ______ is an error-correcting supervised learning algorithm for single-layer feedforward networks with a linear activation function.
Hebbian Learning Rule
Perceptron Learning Rule
Delta Learning Rule
None of the above
Show me the answer
Answer: 2. Perceptron Learning Rule
Explanation:
The Perceptron Learning Rule is used in single-layer feedforward networks with linear activation functions.
It corrects errors by adjusting weights based on the difference between the predicted and actual outputs.
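A minimal Python sketch of this error-correcting update (step activation; the function names and learning rate are illustrative), trained here on the linearly separable AND function:

```python
# Perceptron learning sketch: step activation, error-driven update.
def predict(w, b, x):
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if s >= 0 else 0

def perceptron_update(w, b, x, target, eta=1.0):
    """One update step: w_i += eta * (target - y) * x_i."""
    y = predict(w, b, x)
    err = target - y
    w = [wi + eta * err * xi for wi, xi in zip(w, x)]
    b = b + eta * err
    return w, b

# Learn logical AND (linearly separable, so the perceptron converges).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(10):
    for x, t in data:
        w, b = perceptron_update(w, b, x, t)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```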
406. ______ is also called the Least Mean Square (LMS) rule.
Hebbian Learning Rule
Perceptron Learning Rule
Delta Learning Rule
None of the above
Show me the answer
Answer: 3. Delta Learning Rule
Explanation:
The Delta Learning Rule is also known as the Least Mean Square (LMS) algorithm.
It minimizes the error by adjusting weights based on the gradient of the error function.
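A rough LMS sketch, assuming a single linear unit y = w·x; the target function y = 2x + 1 and the learning rate are illustrative:

```python
# Delta (LMS) rule sketch: linear unit, gradient step on squared error.
def delta_update(w, x, target, eta=0.1):
    """w_i += eta * (target - y) * x_i, with linear output y = w . x."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    err = target - y
    return [wi + eta * err * xi for wi, xi in zip(w, x)]

# Fit y = 2*x + 1, using an extra bias input fixed at 1.0.
w = [0.0, 0.0]
samples = [([x, 1.0], 2 * x + 1) for x in (0.0, 1.0, 2.0, 3.0)]
for _ in range(200):
    for x, t in samples:
        w = delta_update(w, x, t)
print(w)  # approaches [2.0, 1.0]
```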
407. A ______ is an algorithm for supervised learning of binary classifiers. This algorithm enables neurons to learn and processes elements in the training set one at a time.
Hebbian Learning
Delta Learning
Perceptron
None of the above
Show me the answer
Answer: 3. Perceptron
Explanation:
The Perceptron is a supervised learning algorithm used for binary classification.
It processes training examples one at a time and updates weights to minimize errors.
408. ______ are one of the types of perceptron.
Single layer perceptron
Multilayer perceptron
No layer perceptron
Both A and B
Show me the answer
Answer: 4. Both A and B
Explanation:
Single-layer perceptron and multilayer perceptron are two types of perceptrons.
Single-layer perceptrons are simpler, while multilayer perceptrons can model more complex relationships.
409. ______ can learn only linearly separable patterns.
Single layer perceptron
Multilayer perceptron
No layer perceptron
None of the above
Show me the answer
Answer: 1. Single layer perceptron
Explanation:
A single-layer perceptron can only learn patterns that are linearly separable.
It cannot handle complex, non-linear relationships.
410. ______ has two or more layers and hence greater processing power.
Single layer perceptron
Multilayer perceptron
No layer perceptron
None of the above
Show me the answer
Answer: 2. Multilayer perceptron
Explanation:
A multilayer perceptron has multiple layers of neurons, allowing it to learn complex, non-linear patterns.
It has greater processing power compared to single-layer perceptrons.
411. The given figure is of ______.
Single layer perceptron
Multilayer perceptron
No layer perceptron
None of the above
Show me the answer
Answer: 1. Single layer perceptron
Explanation:
The figure represents a single-layer perceptron, which consists of input nodes, a summation function, and an activation function.
It is the simplest form of a neural network.
412. The given figure is of ______.
[Figure: input layer, hidden layers, output layer]
Single layer perceptron
Multilayer perceptron
No layer perceptron
None of the above
Show me the answer
Answer: 2. Multilayer perceptron
Explanation:
The figure represents a multilayer perceptron, which includes an input layer, one or more hidden layers, and an output layer.
It is capable of learning complex patterns.
413. Identify the type of perceptron activation function.
y = 1 / (1 + e^(-x)) and y = (e^x − e^(-x)) / (e^x + e^(-x))
Sigmoid and tanh
Tanh and sigmoid
Linear, Relu
Relu, Linear
Show me the answer
Answer: 1. Sigmoid and tanh
Explanation:
The functions shown are the sigmoid and tanh activation functions.
Sigmoid outputs values between 0 and 1, while tanh outputs values between -1 and 1.
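Both functions can be sketched directly from their formulas using the standard library (the function names are the conventional ones):

```python
import math

def sigmoid(x):
    """Squashes to (0, 1): y = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Squashes to (-1, 1): y = (e^x - e^-x) / (e^x + e^-x)."""
    return (math.exp(x) - math.exp(-x)) / (math.exp(x) + math.exp(-x))

print(sigmoid(0))  # 0.5
print(tanh(0))     # 0.0
```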
414. Identify the type of perceptron activation function.
y = max(0, x) and y = max(αx, x)
Sigmoid and tanh
Tanh and sigmoid
Leaky ReLU and ReLU
ReLU and Leaky ReLU
Show me the answer
Answer: 4. ReLU and Leaky ReLU
Explanation:
The functions shown are the ReLU (Rectified Linear Unit) and Leaky ReLU activation functions.
ReLU outputs the input directly if it is positive, otherwise zero. Leaky ReLU allows a small, non-zero gradient for negative inputs.
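A direct Python sketch of the two formulas; the default slope α = 0.01 is an assumption (it is merely a common choice):

```python
def relu(x):
    """y = max(0, x): passes positives through, zeroes out negatives."""
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    """y = max(alpha*x, x): small slope alpha for negative inputs."""
    return max(alpha * x, x)

print(relu(-2.0), relu(3.0))              # 0.0 3.0
print(leaky_relu(-2.0), leaky_relu(3.0))  # -0.02 3.0
```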
415. Gradient descent is an optimization algorithm which is commonly-used to ______ machine learning models and neural networks.
Train
Test
Validate
None of the above
Show me the answer
Answer: 1. Train
Explanation:
Gradient Descent is an optimization algorithm used to train machine learning models and neural networks.
It minimizes the loss function by iteratively adjusting the model's parameters.
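A minimal gradient-descent sketch on a one-parameter toy loss; the loss function, step size, and step count are all illustrative:

```python
# Minimize f(w) = (w - 3)^2, whose gradient is f'(w) = 2 * (w - 3);
# the minimum is at w = 3.
def gradient_descent(grad, w0, eta=0.1, steps=100):
    w = w0
    for _ in range(steps):
        w = w - eta * grad(w)  # step against the gradient
    return w

w = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(round(w, 4))  # 3.0
```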
416. ______ is one of the types of gradient descent algorithm.
Batch Gradient Descent (BGD)
Stochastic Gradient Descent (SGD)
Mini-Batch Gradient Descent
All of the above
Show me the answer
Answer: 4. All of the above
Explanation:
Batch Gradient Descent (BGD), Stochastic Gradient Descent (SGD), and Mini-Batch Gradient Descent are all variants of the gradient descent algorithm.
Each variant has its own approach to updating the model's parameters.
417. The backpropagation algorithm is used for which type of neural network?
Single-layer feedforward neural networks
Multilayer feedforward neural networks
Convolutional neural networks
Recurrent neural networks
Show me the answer
Answer: 2. Multilayer feedforward neural networks
Explanation:
The backpropagation algorithm is primarily used in multilayer feedforward neural networks.
It calculates the gradient of the loss function with respect to the weights and propagates the error backward to update the weights.
418. Which of the following is true about the backpropagation algorithm?
It is a supervised learning algorithm
It is an unsupervised learning algorithm
It is a reinforcement learning algorithm
It is a semi-supervised learning algorithm
Show me the answer
Answer: 1. It is a supervised learning algorithm
Explanation:
The backpropagation algorithm is a supervised learning algorithm.
It requires labeled data to calculate the error and update the model's weights.
419. The backpropagation algorithm involves two phases. What are they?
Forward propagation and backward propagation
Feature selection and feature extraction
Clustering and classification
Regression and classification
Show me the answer
Answer: 1. Forward propagation and backward propagation
Explanation:
The backpropagation algorithm consists of two phases:
Forward propagation: Input data is passed through the network to compute the output.
Backward propagation: The error is calculated and propagated backward to update the weights.
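The two phases can be sketched on a deliberately tiny network (one hidden unit, one output unit, no biases; all weights, rates, and step counts are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

x, target = 1.0, 0.0
w1, w2 = 0.5, -0.4   # hidden-layer and output-layer weights
eta = 0.5

for _ in range(500):
    # Forward propagation: compute activations layer by layer.
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    # Backward propagation: chain rule on squared error 0.5*(y - target)^2.
    delta_out = (y - target) * y * (1 - y)
    delta_hid = delta_out * w2 * h * (1 - h)
    w2 -= eta * delta_out * h
    w1 -= eta * delta_hid * x

print(round(sigmoid(w2 * sigmoid(w1 * x)), 3))  # small value near the target 0
```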
420. Which of the following is the activation function commonly used in the backpropagation algorithm?
Linear
Sigmoid
ReLU
Tanh
Show me the answer
Answer: 2. Sigmoid
Explanation:
The sigmoid activation function is commonly used in the backpropagation algorithm.
It is differentiable, which is essential for calculating gradients during backpropagation.
421. Is it true that the backpropagation law is also known as the generalized delta rule?
Yes
No
Partially yes
Not sure
Show me the answer
Answer: 1. Yes
Explanation:
The backpropagation law is also known as the generalized delta rule.
It is used to update the weights of a neural network based on the error gradient.
422. ______ consists of a set of neurons where each neuron corresponds to a pixel of the difference image and is connected to all the neurons in the neighborhood.
Hopfield neural network
Biological neural network
Hamming neural network
McCulloch-Pitts neural network
Show me the answer
Answer: 1. Hopfield neural network
Explanation:
The Hopfield neural network consists of neurons that are fully connected to each other.
Each neuron corresponds to a pixel in the input image, and the network is used for tasks like pattern recognition and optimization.
423. The ______ is commonly used for auto-association and optimization tasks.
Hopfield neural network
Biological neural network
Hamming neural network
McCulloch-Pitts neural network
Show me the answer
Answer: 1. Hopfield neural network
Explanation:
The Hopfield neural network is commonly used for auto-association and optimization tasks.
It can store and retrieve patterns, making it useful for memory-based applications.
424. In ______, the input and output patterns are discrete vectors, which can be either binary (0, 1) or bipolar (+1, -1) in nature.
Continuous Hopfield network
Discrete Hopfield network
Sequential Hopfield network
None of the above
Show me the answer
Answer: 2. Discrete Hopfield network
Explanation:
In a Discrete Hopfield Network, the input and output patterns are discrete vectors, typically binary (0, 1) or bipolar (+1, -1).
It is used for tasks like pattern recognition and associative memory.
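A small illustrative sketch of a Discrete Hopfield Network: one bipolar pattern is stored via a Hebbian outer product, then recalled from a corrupted probe (pattern, size, and function names are assumptions for the example):

```python
# Store patterns with Hebbian outer products (zero diagonal).
def train(patterns, n):
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

# Recall: repeatedly threshold each neuron's weighted input (asynchronous).
def recall(W, state, steps=5):
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            total = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if total >= 0 else -1
    return s

stored = [1, -1, 1, -1]
W = train([stored], 4)
noisy = [1, -1, 1, 1]  # one flipped bit
print(recall(W, noisy))  # recovers [1, -1, 1, -1]
```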
425. In comparison with the Discrete Hopfield Network, the Continuous Hopfield Network has ______ as a continuous variable.
Space
Range
Time
Velocity
Show me the answer
Answer: 3. Time
Explanation:
In a Continuous Hopfield Network, time is treated as a continuous variable.
This allows the network to model dynamic systems and solve optimization problems more effectively.
426. The ______ architecture can be built up by adding electrical components such as amplifiers, which map the input voltage to the output voltage over a sigmoid activation function.
Continuous Hopfield network
Discrete Hopfield network
Sequential Hopfield network
None of the above
Show me the answer
Answer: 1. Continuous Hopfield network
Explanation:
The Continuous Hopfield Network can be built using electrical components like amplifiers.
These components map the input voltage to the output voltage using a sigmoid activation function.
427. ______ is a network having a single linear unit.
Madaline
Adaline
Backpropagation
Perceptron
Show me the answer
Answer: 2. Adaline
Explanation:
Adaline (Adaptive Linear Neuron) is a neural network with a single linear unit.
It uses the Delta Learning Rule for training and is used for binary classification tasks.
428. The basic structure of ______ is similar to perceptron having an extra feedback loop.
Madaline
Adaline
Backpropagation
None of the above
Show me the answer
Answer: 2. Adaline
Explanation:
The Adaline network has a structure similar to the perceptron but includes an extra feedback loop.
This feedback loop allows it to adjust weights based on the error between the predicted and actual outputs.
429. Identify the diagram and answer the question: which training algorithm is this?
Madaline
Adaline
Perceptron
Backpropagation
Show me the answer
Answer: 2. Adaline
Explanation:
The diagram represents the Adaline training algorithm.
Adaline uses the Delta Learning Rule to minimize the error between the predicted and actual outputs.
430. ______ is a network that consists of many Adalines in parallel.
Madaline
Adaline
Backpropagation
Perceptron
Show me the answer
Answer: 1. Madaline
Explanation:
Madaline (Many Adaline) is a network that consists of multiple Adaline units working in parallel.
It is used for more complex tasks that require multiple outputs.
431. In a neural network, the Delta rule works only for the ______.
Hidden layer
Input layer
Weight and bias
Output layer
Show me the answer
Answer: 4. Output layer
Explanation:
The Delta Rule works by adjusting the weights based on the error at the output layer.
It is used to minimize the difference between the predicted and actual outputs.
432. The generalized delta rule, also called the ______ rule, is a way of creating the desired values of the hidden layer.
Feed-forward
Backpropagation
Perceptron
Adaline
Show me the answer
Answer: 2. Backpropagation
Explanation:
The Generalized Delta Rule is also known as the Backpropagation Rule.
It is used to calculate the error gradients for the hidden layers in a neural network, enabling the network to learn complex patterns.