Learn about how bots affect community engagement and the steps PublicInput is taking to protect you and your data.
What are bots?
Bots are programs designed to go through sites on the internet and perform a specific function. Essentially, bots can help to automate certain tasks that would otherwise require lots of manpower. Every day, millions of these programs “crawl” through the internet. In fact, it’s estimated that in 2021 bots accounted for over 40% of daily internet traffic.
Are there good bots?
At their core, bots are neither “good” nor “bad.” Bots are just made up of code; how they are used is up to their creators.
Some bots perform helpful functions. If you’ve ever received auto-prompted assistance from the PublicInput website help box, then you’ve interacted with a bot: one that helps us direct customer support requests to the appropriate person.
What about “bad” bots?
While there are plenty of helpful bots on the internet, many are used for more nefarious purposes. Bots can do all sorts of things, from creating fake social media accounts to stealing or leaking private data. These bots act like “real” people, which allows them to sign up for emails, fill out forms, post comments on a website, and more, spamming servers with false engagement.
What does this have to do with public engagement?
When bots crawl through the internet, they can come across public engagement surveys and submit realistic responses in an attempt to fool the system into thinking they’re real people. Without a good way of filtering out this fake engagement data, you run the risk of letting these bots drown out the voices of actual community members.
Worse, these bots could even be intentionally sent to your site to skew data toward one result over another or to divide your community. A recent example of bots swaying public opinion occurred during the 2016 U.S. presidential election: one study found that nearly 20% of all election-related discourse on Twitter the day before the election was generated by bots.
How can we stop bots from influencing engagement?
In the past, PublicInput used IP addresses to determine the legitimacy of a participant. But like other technologies, bots are ever-changing. In the last year, there’s been an increase in the number of tools that allow bots to use legitimate residential IP addresses.
A solution you may be familiar with is CAPTCHA or reCAPTCHA. CAPTCHA stands for “Completely Automated Public Turing test to tell Computers and Humans Apart.” These tests commonly thwart bots by asking the user to identify objects or decipher a distorted image. For example, a reCAPTCHA might prompt users to identify an object within a series of images, like a bus or a bicycle. But this popup can disrupt the user experience, meaning fewer people will participate.
What is PublicInput’s solution to bot detection?
We’ve developed the PublicInput BotShield to protect your engagement data. We’re using a combination of a few different technologies, including CloudFlare’s bot detection and Google’s 3rd generation reCAPTCHA service.
With reCAPTCHA v3, any user that scores less than 0.7 will be automatically flagged as a bot. If there’s bot activity, you can use a link on your participants tab to view a list of accounts flagged as bots. These bot responses will no longer show up in reports, tables, results, or exports of your data. If you find that a human user has been flagged by mistake, you can restore their responses in the participants tab on your engagement hub.
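The score-threshold logic described above can be sketched in a few lines. This is an illustrative example only, not PublicInput’s actual implementation: the function names and the response structure are assumptions, and in a real deployment the score would come from Google’s reCAPTCHA verification service rather than being stored directly on each response.

```python
# Illustrative sketch of score-based bot flagging (hypothetical names;
# not PublicInput's actual code). reCAPTCHA v3 assigns each interaction
# a score from 0.0 (very likely a bot) to 1.0 (very likely a human).

BOT_SCORE_THRESHOLD = 0.7  # scores below this value are flagged as bots

def is_flagged_as_bot(recaptcha_score: float) -> bool:
    """Return True when a participant's reCAPTCHA v3 score falls below the threshold."""
    return recaptcha_score < BOT_SCORE_THRESHOLD

def filter_responses(responses):
    """Split responses into (kept, flagged) lists by score.

    Each response is assumed to be a dict with a 'score' key,
    e.g. {"participant": "p1", "score": 0.9}.
    """
    kept, flagged = [], []
    for response in responses:
        if is_flagged_as_bot(response["score"]):
            flagged.append(response)   # hidden from reports and exports
        else:
            kept.append(response)      # counts toward engagement results
    return kept, flagged

# Example: one likely human, one likely bot
sample = [
    {"participant": "p1", "score": 0.9},
    {"participant": "p2", "score": 0.3},
]
kept, flagged = filter_responses(sample)
print([r["participant"] for r in kept])     # ['p1']
print([r["participant"] for r in flagged])  # ['p2']
```

Flagged responses aren’t deleted in this sketch, only set aside, which mirrors the ability described above to restore a human user who was flagged by mistake.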
The combination of CloudFlare and reCAPTCHA v3 will catch bots while minimizing the amount of disruption to the user experience, creating the best of both worlds for your community engagement experience. These features are available immediately for all of our customers on the Engage or Engage Plus plans.
Watch this video for a quick overview of bots and how PublicInput’s BotShield protects the integrity of your engagement data.