
I am writing a web app that manages volunteer events for political campaigns. It will optionally allow volunteers to create events. If an event is entered by a bad actor, or even by a supporter who titles it with a phrase that does not reflect well on the campaign, I want to mark the event as private and flag it for review.

Is there a way to get semantic analysis of the text that grades it on two scales:

  • Democrat - Republican
  • OK - Concerning

So "Help Joe Biden" would register as strong Democratic, "Help Donald Trump" would register as strong Republican. And "Let's take on the evil Wall St." would register as concerning while "Motherhood and Apple Pie" would register as strong OK. (A campaign may very well be good with painting Wall St. as evil, but that's one that should be reviewed.)

I know that if I have campaign managers score created events for six months, I would then have the data needed to train a model to do this. But that means letting bad stuff go public for six months, and having problematic content in the middle of campaign season would be bad.
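For context, the supervised version I'd eventually train is not the hard part. A minimal sketch, with placeholder data standing in for the manager-scored titles I don't have yet, would be something like:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data standing in for six months of manager scores;
# 1 = needs review, 0 = OK.
titles = ["Help Joe Biden", "Motherhood and Apple Pie",
          "Let's take on the evil Wall St.", "Phone bank for the campaign"]
needs_review = [0, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(titles, needs_review)

# Probability that a new title needs review.
print(model.predict_proba(["Wall St. bankers are evil"])[:, 1])
```

The bottleneck is collecting those labels, not the modeling.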

So, is there a solution for this? I can find lots of sentiment analysis that measures positive/negative, which is great for marketing and the like, but this is a much harder problem.
