UNIT 3 MCQ PART 1: HOW CAN MACHINES SEE?

    1. How can machines "see"?
      A. Using eyes
      B. With the help of Computer Vision 
      C. Using microphones
      D. Using keyboards

    2. Which of the following is used in Computer Vision to capture images?
      A. Cameras 
      B. Sensors only
      C. Microphones
      D. Laptops

    3. Which component helps analyze images in Computer Vision?
      A. Deep Learning Models 
      B. Solar Panels
      C. Wheels
      D. Sensors only

    4. Which of these is a task of Computer Vision?
      A. Inspecting products for defects 
      B. Writing code
      C. Listening to music
      D. Playing games

    5. Computer Vision allows AI to:
      A. Recognize objects 
      B. Only store images
      C. Convert images to text without analysis
      D. None of the above

    6. Which of these is a limitation of Computer Vision?
      A. AI can understand images perfectly
      B. Poor image quality can cause errors 
      C. AI never needs data
      D. Images don’t need preprocessing

    7. Computer Vision is most useful in:
      A. Cooking
      B. Image recognition 
      C. Writing essays
      D. Listening to audio

    8. Which AI model is commonly used in Computer Vision?
      A. Convolutional Neural Networks (CNN) 
      B. Linear Regression
      C. Decision Trees
      D. K-Means

    9. Computer Vision is better than humans at:
      A. Smelling
      B. Analyzing thousands of images quickly 
      C. Feeling emotions
      D. Listening to sounds

    10. Digital images in computers are made of:
      A. Text
      B. Pixels 
      C. Audio
      D. Shapes

    Answers: 1-B, 2-A, 3-A, 4-A, 5-A, 6-B, 7-B, 8-A, 9-B, 10-B

PIXELS AND DIGITAL IMAGES

    1. Each pixel in a digital image stores:
      A. A number representing color 
      B. A sound
      C. A word
      D. A video

    2. In a grayscale image, a pixel value of 0 represents:
      A. White
      B. Black 
      C. Gray
      D. Red

    3. In a grayscale image, a pixel value of 255 represents:
      A. Black
      B. White 
      C. Gray
      D. Blue

    4. Which values represent shades of gray between black and white?
      A. 0–255 
      B. 1–100
      C. 100–200
      D. 0–1000

    5. RGB in color images stands for:
      A. Red, Green, Blue 
      B. Random Green Blue
      C. Red Gray Black
      D. None of the above

    6. More pixels in an image lead to:
      A. Higher resolution 
      B. Lower resolution
      C. Black and white images
      D. Blurry images

    7. Fewer pixels in an image make it:
      A. Clear
      B. Blurry or pixelated 
      C. Larger
      D. Colorful

    8. A digital image can be:
      A. Structured only
      B. Structured, semi-structured, or unstructured 
      C. Only unstructured
      D. Only text

    9. Which is NOT a use of pixel values?
      A. Building the image
      B. Storing color
      C. Playing music 
      D. Representing brightness

    10. Pixels in color images use:
      A. One number per pixel
      B. Two numbers per pixel
      C. Three numbers per pixel (RGB) 
      D. Four numbers per pixel

    Answers: 1-A, 2-B, 3-B, 4-A, 5-A, 6-A, 7-B, 8-B, 9-C, 10-C
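The pixel questions above can be made concrete with a short sketch. This is a minimal illustration in plain Python (no image libraries assumed; the tiny 2x2 grid is an invented example): a grayscale image is just a grid of numbers from 0 (black) to 255 (white), and a color image stores three numbers, R, G, B, per pixel.

```python
# A tiny 2x2 grayscale image: each pixel is one number, 0 = black, 255 = white.
gray_image = [
    [0, 255],    # black pixel, white pixel
    [128, 64],   # mid gray, darker gray
]

# A color image stores three numbers (Red, Green, Blue) per pixel.
red_pixel = (255, 0, 0)        # pure red
white_pixel = (255, 255, 255)  # all channels at maximum = white

# Resolution is just the pixel count: more pixels, finer detail.
height = len(gray_image)
width = len(gray_image[0])
print(f"Resolution: {width}x{height} = {width * height} pixels")
```

Real images follow the same idea at a much larger scale: a 1920x1080 photo is simply this grid with about two million entries.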

IMAGE ACQUISITION

    1. Image acquisition is the process of:
      A. Cleaning images
      B. Capturing images or videos 
      C. Segmenting images
      D. High-level processing

    2. Which device is NOT used for image acquisition?
      A. Digital camera
      B. Scanner
      C. Microphone 
      D. Design software

    3. High resolution in cameras:
      A. Captures finer details 
      B. Reduces clarity
      C. Produces black and white images
      D. Destroys pixels

    4. In medicine, which device captures detailed internal images?
      A. Camera
      B. MRI 
      C. Scanner only
      D. Microphone

    5. Lighting affects image acquisition because:
      A. Dark or bright images may affect AI analysis 
      B. It doesn’t matter
      C. Only changes color of text
      D. Only affects audio

    6. Angles during capture are important because:
      A. They change AI algorithms
      B. They affect clarity and object visibility 
      C. Only change pixel size
      D. None of the above

    7. Image acquisition is the first step in:
      A. Computer Vision 
      B. Deep Learning only
      C. Text Recognition
      D. Data Cleaning

    8. A digital video can be considered:
      A. Multiple images 
      B. A single pixel
      C. Only audio
      D. A text file

    9. The quality of captured images affects:
      A. AI’s ability to understand images 
      B. Only storage size
      C. Only color format
      D. Only number of pixels

    10. Special devices like CT scans are used in:
      A. Medicine 
      B. Agriculture only
      C. Finance
      D. Gaming

    Answers: 1-B, 2-C, 3-A, 4-B, 5-A, 6-B, 7-A, 8-A, 9-A, 10-A
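Question 8 above notes that a digital video is simply a sequence of still images (frames). A minimal plain-Python sketch of that idea (the 2x2 frames and the 30 fps capture rate are illustrative assumptions, not real camera values):

```python
# A video is a sequence of frames; each frame is an ordinary image grid.
frame_dark = [[0, 0], [0, 0]]            # an all-black 2x2 frame
frame_bright = [[255, 255], [255, 255]]  # an all-white 2x2 frame
video = [frame_dark, frame_bright] * 15  # 30 frames in total

fps = 30  # frames per second (assumed capture rate)
duration = len(video) / fps
print(f"{len(video)} frames at {fps} fps = {duration:.1f} s of video")
```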

IMAGE PREPROCESSING

    1. Preprocessing in Computer Vision means:
      A. Capturing images
      B. Cleaning and improving images 
      C. Recognizing objects
      D. Detecting multiple objects

    2. Noise reduction:
      A. Adds colors to images
      B. Removes blurry spots and distractions 
      C. Increases pixels
      D. Crops images

    3. Normalization adjusts:
      A. Image size
      B. Brightness and contrast 
      C. Object detection
      D. Bounding boxes

    4. Resizing images is important to:
      A. Make them same size for analysis 
      B. Increase noise
      C. Change pixel color
      D. Make images blurry

    5. Histogram equalization helps to:
      A. Adjust dark and bright areas 
      B. Remove objects
      C. Convert image to audio
      D. Increase bounding boxes

    6. Preprocessing is required because:
      A. AI cannot analyze raw images well 
      B. Humans don’t see images
      C. AI needs sound
      D. Only for text

    7. Cropping images is used to:
      A. Focus on relevant areas 
      B. Change colors
      C. Resize pixels only
      D. Remove noise only

    8. Preprocessing makes images:
      A. Dirty
      B. AI-ready 
      C. Black and white only
      D. Audio-ready

    9. Normalization makes images:
      A. Uniform in brightness and contrast 
      B. Uniform in size only
      C. Blurry
      D. Noisy

    10. Noise in images refers to:
      A. Sounds
      B. Blurry or unwanted parts 
      C. Pixels only
      D. Color balance

    Answers: 1-B, 2-B, 3-B, 4-A, 5-A, 6-A, 7-A, 8-B, 9-A, 10-B
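Two of the preprocessing steps quizzed above, normalization and resizing, can be sketched in a few lines of plain Python. This is a simplified illustration (the function names and the nearest-neighbour resize strategy are choices made for this sketch; real pipelines typically use a library such as OpenCV or Pillow):

```python
def normalize(image):
    """Scale 0-255 pixel values to the 0.0-1.0 range, a common
    normalization step before feeding images to a model."""
    return [[p / 255 for p in row] for row in image]

def resize_nearest(image, new_h, new_w):
    """Crude nearest-neighbour resize, so every image in a dataset
    can be brought to the same size for analysis."""
    old_h, old_w = len(image), len(image[0])
    return [[image[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

raw = [[0, 255], [128, 64]]
print(normalize(raw))             # every value now between 0.0 and 1.0
print(resize_nearest(raw, 4, 4))  # 2x2 image stretched to 4x4
```

Note that resizing changes only the pixel grid, while normalization changes only the values; both leave the image "AI-ready" without altering what it depicts.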

FEATURE EXTRACTION

    1. Feature extraction is:
      A. Capturing images
      B. Finding important details in images 
      C. High-level processing
      D. Detection

    2. Edge detection is used for:
      A. Identifying object outlines 
      B. Adding colors
      C. Cropping
      D. Brightness adjustment

    3. Corner detection helps to:
      A. Find edges only
      B. Spot sharp bends in shapes 
      C. Adjust pixels
      D. Segment objects

    4. Texture analysis checks for:
      A. Patterns like roughness or smoothness 
      B. Pixel values only
      C. Bounding boxes
      D. Cropping images

    5. Color-based features are used to:
      A. Separate objects by color 
      B. Detect corners
      C. Noise reduction
      D. Resize images

    6. Manual feature selection is replaced by:
      A. CNNs (Convolutional Neural Networks) 
      B. OCR
      C. RGB scaling
      D. KNN

    7. Deep learning in feature extraction:
      A. Learns important features automatically 
      B. Removes objects
      C. Converts image to grayscale
      D. Crops images

    8. Features are important because they:
      A. Reduce image size
      B. Help AI recognize objects 
      C. Only detect color
      D. Increase noise

    9. Which is NOT a feature extraction method?
      A. Edge detection
      B. Histogram Equalization 
      C. Corner detection
      D. Texture analysis

    10. Feature extraction comes after:
      A. Preprocessing 
      B. Detection
      C. Segmentation
      D. High-level processing

    Answers: 1-B, 2-A, 3-B, 4-A, 5-A, 6-A, 7-A, 8-B, 9-B, 10-A
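Edge detection, the feature-extraction method in question 2 above, comes down to finding places where brightness changes sharply. The following plain-Python sketch shows the simplest possible version, a one-dimensional horizontal gradient (the function name and the threshold of 100 are choices made for this illustration; real systems use operators such as Sobel or Canny):

```python
def horizontal_edges(image, threshold=100):
    """Flag pixels where brightness jumps sharply from the pixel to
    the left -- the simplest form of edge detection (a 1-D gradient)."""
    edges = []
    for row in image:
        edge_row = [False]  # first column has no left neighbour
        for left, right in zip(row, row[1:]):
            edge_row.append(abs(right - left) > threshold)
        edges.append(edge_row)
    return edges

# A dark object (0) against a bright background (255): edges are
# flagged exactly where the object outline crosses each scan line.
img = [
    [255, 255, 0, 0, 255],
    [255, 255, 0, 0, 255],
]
print(horizontal_edges(img))
```

A CNN, as question 6 notes, learns filters like this gradient automatically from data instead of having them hand-designed.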

