Google Introduces a New Method to Block Calendar Invitation Spam

Google is finally making it easy for users to block unsolicited calendar invitations.

The company’s new “Automatically add invitations” setting now includes an option that adds only invitations the user has already accepted via email (RSVP’d), instead of automatically placing every invitation on the default calendar.

“To help keep your Google Calendar free from spam, you can now select an option to display events on your calendar only if they come from a sender you know. If you select this option, you still get email event invitations from unknown senders, but they appear on your calendar only after you accept,” explained the Google Workspace team.
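To see what this filtering amounts to in practice, the sketch below (not Google’s implementation) uses the Google Calendar API v3 via google-api-python-client to list upcoming events on the primary calendar and separate out the invitations the signed-in user has not yet accepted; the token.json credential file and the me@example.com address are placeholder assumptions.

# A minimal sketch (not Google's implementation), assuming the
# google-api-python-client library and an OAuth token saved as token.json:
# it lists upcoming events on the primary calendar and flags the
# invitations the signed-in user has not yet accepted ("needsAction").
import datetime

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/calendar.readonly"]


def pending_invitations(user_email):
    """Return upcoming events the user was invited to but has not accepted."""
    creds = Credentials.from_authorized_user_file("token.json", SCOPES)
    service = build("calendar", "v3", credentials=creds)

    now = datetime.datetime.utcnow().isoformat() + "Z"
    events = service.events().list(
        calendarId="primary",
        timeMin=now,
        singleEvents=True,
        orderBy="startTime",
        maxResults=50,
    ).execute().get("items", [])

    pending = []
    for event in events:
        for attendee in event.get("attendees", []):
            # "needsAction" means the invitation has not been RSVP'd yet.
            if (attendee.get("email") == user_email
                    and attendee.get("responseStatus") == "needsAction"):
                pending.append(event)
    return pending


for event in pending_invitations("me@example.com"):  # placeholder address
    organizer = event.get("organizer", {}).get("email", "unknown sender")
    print(event.get("summary", "(no title)"), "from", organizer)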

Threat actors have used unsolicited calendar invitations to target Google Calendar users. While some invitation spam is merely a nuisance, attackers can embed malicious URLs in these invites to redirect targets to phishing landing pages.

“As before, you can also choose to have all invitations appear on your calendar or only those you’ve accepted—letting you customize the display to best meet your needs. Additionally, admins can set the default reply option for their users in the Google Admin console. Note that end users can indicate their preference in their own Calendar settings,” Google added.
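For users who want to check their own configuration programmatically, the short sketch below uses the Calendar API’s read-only settings collection to print the signed-in user’s calendar settings; whether the new invitation-display preference is surfaced in this collection is an assumption not confirmed by the source, and token.json is again a placeholder credential file.

# A minimal sketch, assuming google-api-python-client: the Calendar API v3
# exposes a read-only "settings" collection for the signed-in user. Listing it
# shows how the calendar is configured (e.g. "timezone", "weekStart"); whether
# the new invitation-display preference appears here is not confirmed.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/calendar.settings.readonly"]

creds = Credentials.from_authorized_user_file("token.json", SCOPES)
service = build("calendar", "v3", credentials=creds)

for setting in service.settings().list().execute().get("items", []):
    print(setting["id"], "=", setting["value"])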

The sources for this piece include an article in BleepingComputer.
