Good vs. Bad Automated Content? It's In the Context Layer

8.28.17

The difference between good automated content and bad automated content can be boiled down to the number of scenarios the programmer creates to turn ordinary data into beautiful prose.

Data variability, meaning the number and depth of insights that change as the underlying data changes, is the key quality driver in Natural Language Generation (NLG). And to get data variability right, you have to create a lot of scenarios.
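To make that concrete, here is a minimal sketch of what a scenario table can look like in code. The metric name, thresholds, and phrasings are all hypothetical, invented for illustration; they are not Automated Insights' actual rules:

```python
# A minimal sketch of scenario-driven NLG: each scenario pairs a condition
# on the data with its own phrasing. Metric name, thresholds, and wording
# are all hypothetical.
SCENARIOS = [
    (lambda d: d["revenue_change"] > 0.10,
     "Revenue surged {magnitude:.0%} over the prior quarter."),
    (lambda d: d["revenue_change"] > 0,
     "Revenue edged up {magnitude:.0%} over the prior quarter."),
    (lambda d: d["revenue_change"] == 0,
     "Revenue held flat quarter over quarter."),
    (lambda d: True,  # catch-all: any decline
     "Revenue slipped {magnitude:.0%} from the prior quarter."),
]

def narrate(data: dict) -> str:
    """Return the phrasing of the first scenario whose condition matches."""
    enriched = {**data, "magnitude": abs(data["revenue_change"])}
    for condition, template in SCENARIOS:
        if condition(data):
            return template.format(**enriched)

print(narrate({"revenue_change": 0.14}))   # Revenue surged 14% ...
print(narrate({"revenue_change": -0.03}))  # Revenue slipped 3% ...
```

Four scenarios give four possible stories; a production engine would track far more conditions, and combinations of conditions, per data point.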

NLG creators must always be asking: How vast is the universe of outcomes that the engine takes into account when creating a narrative?

In other words: How many ways can you say something?
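There is a second axis here, beyond reacting to different data: having more than one way to phrase the same insight. A toy sketch, with made-up facts:

```python
import random

# Several equivalent phrasings for a single insight. The facts and wording
# are made up; the point is that repeated reports shouldn't sound canned.
PHRASINGS = [
    "{team} beat {rival} {score}.",
    "{team} topped {rival} by a score of {score}.",
    "It was {team} over {rival}, {score}.",
]

def vary(facts: dict) -> str:
    """Pick one of the equivalent phrasings at random."""
    return random.choice(PHRASINGS).format(**facts)

print(vary({"team": "Durham", "rival": "Norfolk", "score": "5-3"}))
```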

It's not a coincidence that this is the same approach used when developing NLG's reverse twin, Natural Language Processing (NLP).

Words from Data Meets Data from Words

People get touchy when you confuse NLG and NLP, especially those people who do either for a living (which is not a lot of people, but they still get touchy). The truth is that there is a lot of commonality between NLG and NLP. The core concept is the same: Understand the input and translate to the output.

While NLP takes in words and translates those words to data, NLG takes in data and translates that data to words. But creating words isn't the hard part of NLG. In fact, we've reached the point where machines can create complex sentences without too much trouble. In its simplest form, creating words from data is a binary proposition:
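At its most reduced, that means one condition in the data and two possible sentences out; a toy sketch with hypothetical field names:

```python
# The binary proposition at its simplest: one condition, two sentences.
# Field names are hypothetical.
def simple_sentence(home_score: int, away_score: int) -> str:
    if home_score > away_score:
        return "The home team won."
    return "The home team did not win."

print(simple_sentence(3, 1))  # The home team won.
```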

Read the rest at: https://automatedinsights.com/blog/good-vs-bad-automated-content-its-in-the-context-layer