Verbal Ability – Odd One Out – Machine learning models are prone to
Slot – 3 – VA
Q. Five jumbled up sentences, related to a topic, are given below. Four of them can be
put together to form a coherent paragraph. Identify the odd one out and key in the
number of the sentence as your answer:
1. Machine learning models are prone to learning human-like biases from the
training data that feeds these algorithms.
2. Hate speech detection is part of the on-going effort against oppressive and
abusive language on social media.
3. The current automatic detection models miss out on something vital: context.
4. It uses complex algorithms to flag racist or violent speech faster and better than
human beings alone.
5. For instance, algorithms struggle to determine if group identifiers like “gay” or
“black” are used in offensive or prejudiced ways because they’re trained on
imbalanced datasets with unusually high rates of hate speech.
Answer: 3
Solution:
Sentences 2, 4, 1 and 5 can be arranged into a coherent paragraph: hate speech detection is part of the fight against abusive language on social media (2); it relies on complex algorithms to flag such speech faster than humans alone (4); but these machine learning models pick up human-like biases from their training data (1), as the example of group identifiers like "gay" or "black" illustrates (5). Sentence 3, however, raises a different shortcoming, the models' inability to account for context, which none of the other sentences develops.
Hence, the correct answer is sentence 3.