UNIT 6 MCQ PART 1 NEURAL NETWORK
A neural network is designed to:
A) Work like a traditional database
B) Mimic the human brain by processing information through nodes
C) Only store images
D) Replace humans completely -
Neural networks can:
A) Identify patterns
B) Weigh choices
C) Make decisions
D) All of the above -
One advantage of neural networks is:
A) They require explicit programming for feature extraction
B) They automatically extract important features from data
C) They can only process structured data
D) They cannot handle messy data -
Neural networks are widely used in:
A) Chatbots
B) Spam filtering
C) Image tagging
D) All of the above -
Personalized recommendations in e-commerce are possible because of:
A) Small Data
B) Neural networks
C) Only structured databases
D) Manual calculations -
Neural networks help search engines like Google by:
A) Storing websites
B) Improving ranking through pattern recognition
C) Deleting irrelevant data
D) Reducing server space -
Neural networks consist of:
A) Only input and output layers
B) Layers of interconnected nodes: input, hidden, and output layers
C) Only hidden layers
D) Random disconnected nodes -
The human brain analogy in neural networks refers to:
A) Nodes acting like neurons
B) Using physical brain parts
C) Processing only numbers
D) Data storage in memory chips -
The main purpose of a neural network is to:
A) Store huge data
B) Identify patterns and make predictions
C) Replace all programming
D) Reduce dataset size -
Neural networks are particularly useful for:
A) Simple arithmetic
B) Complex patterns and decision-making
C) Deleting small files
D) Storing spreadsheets only -
The input layer of a neural network:
A) Generates the final prediction
B) Contains units representing input features
C) Does all the calculations
D) Updates weights -
Hidden layers:
A) Are optional and do not learn patterns
B) Learn patterns from data and perform calculations
C) Only exist in output layers
D) Store the final output -
Output layer:
A) Takes input features
B) Generates final predictions or results
C) Adjusts weights of hidden layers
D) Deletes irrelevant data -
Each connection between neurons has a:
A) Bias only
B) Weight showing its importance
C) Fixed value of 1
D) Only input value -
If a neuron’s input is big enough, it:
A) Fires and passes information to the next layer
B) Shuts down
C) Deletes itself
D) Sends information backward automatically -
Nodes in a neural network:
A) Only store data
B) Perform calculations and produce output
C) Randomly activate
D) Only exist in input layers -
A deep neural network has:
A) No hidden layers
B) Two or more hidden layers
C) Only one output node
D) No input layer -
Training deep networks is called:
A) Shallow learning
B) Deep learning
C) Weight initialization
D) Feature extraction -
Bias in a neuron helps:
A) Reduce the number of layers
B) Adjust activation function for better accuracy
C) Delete irrelevant inputs
D) Only store data -
Activation functions:
A) Decide whether a neuron activates or not
B) Store weights
C) Delete unnecessary layers
D) Only exist in input nodes -
Examples of activation functions include:
A) Sigmoid, Tanh, ReLU
B) Linear regression
C) Mean and Median
D) None of the above -
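The three activation functions named above can be sketched in plain Python (a minimal illustration, not a library implementation):

```python
import math

def sigmoid(x):
    # Squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Squashes input into (-1, 1), centered at zero
    return math.tanh(x)

def relu(x):
    # Passes positive inputs through unchanged, zeroes out negatives
    return max(0.0, x)
```

For example, sigmoid(0) returns 0.5 and relu(-3) returns 0.0.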
Learning rules in neural networks guide:
A) How weights and biases are updated
B) How data is deleted
C) How to store spreadsheets
D) How to build input layers only -
The most common learning rule is:
A) Forward propagation
B) Backpropagation
C) Weight deletion
D) Bias calculation only -
Forward propagation is:
A) Error correction process
B) Passing input → hidden → output to make predictions
C) Only adjusting biases
D) Deleting irrelevant inputs -
Backpropagation is used to:
A) Pass data forward
B) Fix errors by updating weights
C) Store input features
D) Create activation functions -
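The backpropagation idea (fix errors by updating weights) can be illustrated on a single linear neuron with squared error. This is a toy sketch: the learning rate and training pair are made up, and a real network applies the chain rule through every layer rather than one weight.

```python
# Toy example: learn w so that w * x approximates the target y
w = 0.0          # initial weight (made-up starting value)
lr = 0.1         # learning rate (hypothetical)
x, y = 2.0, 4.0  # one training pair: input 2 should map to 4

for _ in range(50):
    pred = w * x      # forward pass: make a prediction
    error = pred - y  # how wrong the prediction is
    grad = error * x  # d(0.5 * error**2)/dw via the chain rule
    w -= lr * grad    # update step: move w against the gradient
```

Here w converges toward 2.0, since 2.0 * 2.0 == 4.0.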
Each neuron multiplies input by:
A) Bias
B) Weights
C) Activation function
D) Output -
Total input in a neuron is:
A) Only the input numbers
B) Weighted sum of inputs plus bias
C) Only the activation function
D) Only output -
Activation function decides:
A) Whether output = 1 (activate) or 0 (don’t activate)
B) How weights are stored
C) Only forward propagation
D) Input features -
Feedforward means:
A) Passing information backward
B) Step-by-step passing of information from input → hidden → output
C) Only updating weights
D) Storing bias values -
Information in a feedforward network moves:
A) Forward only, no loops
B) Backward only
C) Randomly
D) In loops between layers -
Output of one neuron becomes:
A) Input for the next neuron
B) Stored as bias
C) Only for activation calculation
D) Deleted after one use -
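Chaining such neurons layer by layer gives the feedforward pass described above: each layer's outputs become the next layer's inputs, moving forward only, with no loops. A minimal sketch (the layer sizes, weights, and biases are made up):

```python
def relu(x):
    # Zero out negative values, pass positives through
    return max(0.0, x)

def layer_forward(inputs, weight_matrix, biases, activation):
    # One neuron per row of weights: weighted sum + bias, then activation
    return [activation(sum(x * w for x, w in zip(inputs, row)) + b)
            for row, b in zip(weight_matrix, biases)]

# input (2 features) -> hidden layer (2 neurons) -> output layer (1 neuron)
hidden = layer_forward([1.0, 2.0], [[0.5, -0.3], [0.8, 0.1]], [0.0, -0.2], relu)
output = layer_forward(hidden, [[1.0, 1.0]], [0.0], relu)
```

Note that `hidden`, the output of the first layer, is fed straight in as the input of the second, exactly as the question states.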
Threshold logic in perceptrons gives output as:
A) Any real number
B) 0 or 1
C) Negative only
D) Only positive decimals -
Perceptrons were created in:
A) 1958
B) 1965
C) 2000
D) 1990 -
The simplest neural network is called:
A) Deep Neural Network
B) Perceptron
C) CNN
D) RNN -
Perceptrons use:
A) ReLU activation only
B) Threshold Logic Units (TLUs)
C) Bias only
D) Convolution filters -
Feedforward Neural Network (FFNN) is also called:
A) Convolutional Network
B) Multi-Layer Perceptron (MLP)
C) Recurrent Network
D) GAN -
FFNN moves data:
A) Only forward from input → hidden → output
B) Forward and backward simultaneously
C) Randomly
D) Backward only -
FFNN works well with:
A) Small datasets only
B) Noisy or messy data
C) Text only
D) Images only -
Convolutional Neural Networks (CNNs) are mainly used for:
A) Sequential data
B) Image and video processing
C) Text prediction only
D) Spam detection -
CNNs use:
A) Loops to remember previous data
B) Filters to detect edges, shapes, and textures
C) Only linear regression
D) Threshold Logic Units
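The filter idea in the last question can be sketched as a tiny 2D convolution in plain Python; the 3x3 vertical-edge kernel and the 3x4 image are illustrative values, not taken from any real network.

```python
def convolve2d(image, kernel):
    # Slide the kernel over the image; each output value is the
    # sum of elementwise products (no padding, stride 1)
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A vertical-edge kernel applied to an image with a dark-to-bright boundary
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]
result = convolve2d(image, kernel)  # responds strongly where the edge sits
```

Stacks of learned kernels like this one are what let CNNs detect edges, shapes, and textures in images.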