Can Machine Translation Become as Good as Human Translation?

Tejash Datta
2 min read · Aug 24, 2020


No, I don’t think so. Natural translation requires understanding what’s been said and then re-expressing it in the other language. The key word here is “understanding.” AI won’t be able to translate as well as humans unless it evolves from a specialized pattern-recognition machine into something capable of actual thought and reasoning.

While it’s easy to foresee AI becoming able to perfectly translate sentences where the context and the relationships between the words are clear, any ambiguity could easily throw it off course, even when only one interpretation actually makes sense to a human reader. For example, consider the following two sentences:

The teacher punished the student because he was late.

The teacher punished the student because he was in a bad mood.

Although the two sentences are grammatically identical in construction, we intuitively know that ‘he’ refers to a different person in each case. Translating them correctly requires understanding the relationship between a teacher and a student.

A workaround might be to translate phrase-for-phrase (preserving the pronoun in this case), but the result could be very unnatural, since different languages express things very differently. Translating is less a process of converting and more one of absorbing and re-expressing.
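To see why phrase-for-phrase output reads so unnaturally, here is a minimal sketch of a word-for-word “translator.” The sentence, tokenization, and mini dictionary are all hypothetical and hand-made for illustration; no real MT system works this crudely, but the failure mode is the same in spirit:

```python
# Toy word-for-word "translator": each token is mapped independently and
# the source word order is preserved. Glosses are a hypothetical mini
# dictionary, not a real lexicon.
# Japanese source: 私はリンゴを食べた ("I ate an apple"), tokenized by hand.
GLOSS = {
    "私": "I",
    "は": "(topic)",
    "リンゴ": "apple",
    "を": "(object)",
    "食べた": "ate",
}

def word_for_word(tokens):
    """Map each token through the gloss table, keeping source order."""
    return " ".join(GLOSS.get(t, t) for t in tokens)

tokens = ["私", "は", "リンゴ", "を", "食べた"]
print(word_for_word(tokens))
# → "I (topic) apple (object) ate"
```

The converted output keeps Japanese subject-object-verb order and its grammatical particles, so nothing in it has been “absorbed and re-expressed” as natural English. That re-expression step is exactly what requires understanding.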

With common examples such as the one above, it may be possible to feign an understanding of the underlying relationship through raw statistical analysis of huge sample sets. However, there’s no limit to the kinds of relationships and context that could exist between people or objects. Consider another example:

When the iron ball hit the wooden table, it broke.

When the glass ball hit the metal table, it broke.

Comprehending this requires knowing the hardness of materials. Since there exist many types of materials, and this example is fairly contrived and rare, it’s infeasible to expect pattern recognition limited to language to compensate for a lack of reasoning and the broad knowledge reasoning requires.
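The “raw statistical analysis” workaround can be sketched as a toy co-occurrence model: resolve the ambiguous referent to whichever candidate appears more often alongside the key predicate in a corpus. The counts below are hypothetical and hand-made; the point is only that frequent pairings give a signal while rare ones don’t:

```python
# Toy co-occurrence counts, standing in for statistics mined from a huge
# corpus. All numbers are made up for illustration.
COOCCURRENCE = {
    # (candidate, predicate): times seen together in the pretend corpus
    ("student", "was late"): 90,
    ("teacher", "was late"): 10,
    ("teacher", "was in a bad mood"): 70,
    ("student", "was in a bad mood"): 30,
    # The rarer materials example has almost no corpus signal:
    ("iron ball", "broke"): 1,
    ("wooden table", "broke"): 1,
}

def resolve(candidates, predicate):
    """Pick the candidate co-occurring most with the predicate.

    Returns None when the counts tie, i.e. the statistics give no signal.
    """
    counts = sorted(
        ((COOCCURRENCE.get((c, predicate), 0), c) for c in candidates),
        reverse=True,
    )
    return counts[0][1] if counts[0][0] != counts[1][0] else None

print(resolve(["teacher", "student"], "was late"))           # → student
print(resolve(["teacher", "student"], "was in a bad mood"))  # → teacher
print(resolve(["iron ball", "wooden table"], "broke"))       # → None (tie)
```

The common teacher/student pattern gets “resolved” purely by frequency, with no understanding involved, but the iron-ball sentence defeats the model: without knowledge of material hardness, the statistics simply run out.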

Elon Musk has said that AI keeps becoming able to solve more and more complicated problems because the degrees of freedom it can operate in are constantly increasing. I argue that resolving technically ambiguous sentences that are simple for humans requires the kind of lateral thinking that isn’t afforded by anything short of a singularity where AI “thinking” eclipses human reasoning. At that point, there will probably be bigger problems being solved (or created) than translation.


Tejash Datta

Japanese learner (JLPT N2 in 1 year, 4 months). Developer. Find me on Instagram https://www.instagram.com/tejashdatta/