The ubiquity of e-commerce has attracted widespread fakery on the part of both sellers and users. Fake reviews produced through techniques such as "opinion spamming," "shilling," and "astroturfing" are part of a much larger and still-growing worldwide trend. Many people, for example, now assume that reviews of travel sites and experiences are often not genuine. Shoppers should be similarly skeptical of many seller claims, particularly those promising better health and wealth.
Previous fraud stories have covered faked user reviews, fraudulent scientific research, and the scamming of authors who pay to have their work published in fake scientific journals. Some of the relevant advice is worth restating here. Meanwhile, the FTC is making strides against this kind of fraud and sending an appropriate message to its perpetrators. But what more can be done?
E-commerce companies such as Amazon have established quality standards, particularly for sellers' health and wealth-making claims. In this case, Cure Encapsulations violated Amazon's rules about promotional content. E-commerce sites may also demand and review verifiable supporting evidence. Where such evidence is not forthcoming or sufficiently definitive, sellers' advertisements and offerings should be required to carry a clear and suitable disclaimer that the product or service has not been independently verified.
Setting a stricter bar for requiring this kind of disclaimer would further discourage fraudulent claims and reviews. Signed disclosure agreements should be mandatory and should identify relationships among vendors, reviewers, and entities such as products and stores. This is particularly necessary where compensation, whether financial or in kind, may be involved. Companies should conduct regular spot checks and audits of both disclosure agreements and disclaimers.
To look for this type of fraud, both the companies that host sellers and their internal auditors need to use statistical and artificial intelligence-based fraud-detection methodologies. Quantitative, web-based data mining such as pattern discovery and relational modeling can be particularly effective at finding red flags, including:
- Reviewer behaviors that warrant further scrutiny. Public data available on websites can be mined, including user profile/reviewer IDs, posting times, posting frequency, instances of a user being the first reviewer of a product, and postings of the same or similar reviews at other locations of the same company. For example, a username ending in more than three digits could indicate an automated program is at work.
Also search websites' private/internal data, such as Internet Protocol (IP) and media access control (MAC) addresses, the time taken to post a review, the number of reviewers who created accounts around the same time (including at the time a domain name was registered), and the reviewer's physical location. Follow up on any behavioral red flags detected.
- The content of reviews. This includes obvious similarities in content and style among reviews by different reviewers, and reviews copied and pasted from other reviewers. Patterns of overly positive or overly negative language, or marketing jargon most people do not normally use, also can be signs of made-up reviews. Finally, look for distinctive phrasings such as word n-grams and part-of-speech n-grams (contiguous sequences of n items from a given sample of text or speech), which can be found via data mining.
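The behavioral red flags above can be checked with simple rules before any heavier modeling. The sketch below is a minimal illustration, not a production system: the reviewer records, field names, and thresholds (four trailing digits, five first-reviews, a one-hour account-creation window) are all assumptions chosen for the example.

```python
import re
from datetime import datetime, timedelta

# Hypothetical reviewer records; the schema is illustrative, not any real site's.
reviewers = [
    {"username": "jane_doe",    "created": datetime(2021, 3, 1),         "first_reviews": 1},
    {"username": "buyer8472",   "created": datetime(2022, 6, 5, 10, 0),  "first_reviews": 9},
    {"username": "dealfan8473", "created": datetime(2022, 6, 5, 10, 7),  "first_reviews": 7},
    {"username": "shopr8474",   "created": datetime(2022, 6, 5, 10, 12), "first_reviews": 8},
]

# More than three digits at the end of a username, per the heuristic above.
TRAILING_DIGITS = re.compile(r"\d{4,}$")

def behavioral_flags(reviewer, cohort, window=timedelta(hours=1)):
    """Return a list of behavioral red-flag labels for one reviewer."""
    flags = []
    if TRAILING_DIGITS.search(reviewer["username"]):
        flags.append("auto-generated-looking username")
    if reviewer["first_reviews"] >= 5:  # repeatedly first to review products
        flags.append("frequent first reviewer")
    # Many accounts created within a narrow window suggests batch registration.
    peers = [r for r in cohort if r is not reviewer
             and abs(r["created"] - reviewer["created"]) <= window]
    if len(peers) >= 2:
        flags.append("account created in a burst")
    return flags

for r in reviewers:
    print(r["username"], behavioral_flags(r, reviewers))
```

In practice each flag would feed a scoring model rather than trigger action on its own; a single hit merits scrutiny, not an accusation.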
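The content-based signals can likewise be sketched in a few lines: word n-grams extracted from each review and compared pairwise for overlap expose copy-and-paste reviews. The reviews, reviewer names, and the 0.4 similarity threshold below are invented for illustration; real systems would tune the threshold and add part-of-speech n-grams and stylistic features.

```python
def word_ngrams(text, n=3):
    """Return the set of contiguous n-word sequences in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap between two n-gram sets: |A intersect B| / |A union B|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical reviews; the first two differ by a single word.
reviews = {
    "user_a": "This miracle supplement changed my life and boosted my energy",
    "user_b": "This miracle supplement changed my life and improved my energy",
    "user_c": "Shipping was slow but the packaging arrived intact",
}

# Flag reviewer pairs whose reviews share an unusually high fraction of 3-grams.
THRESHOLD = 0.4  # illustrative cutoff, not a recommended value
names = list(reviews)
for i, u in enumerate(names):
    for v in names[i + 1:]:
        score = jaccard(word_ngrams(reviews[u]), word_ngrams(reviews[v]))
        if score >= THRESHOLD:
            print(f"possible copy-paste: {u} vs {v} (similarity {score:.2f})")
```

Jaccard overlap on n-gram sets is deliberately crude but cheap, which makes it a reasonable first pass before more expensive stylometric or machine-learning comparisons.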