Microsoft has just retired an AI capability that could not only recognize who you are but also claim to read your emotions. Azure Face, the company's facial recognition service, included an emotion recognition feature that it was known for. However, experts argued the technology was unethical and risked violating human rights.
Microsoft has also released an updated version of its Responsible AI Standard. The company wants AI to be a positive force in the world, and it sees a carefully governed Azure Face as one potential way to achieve that.
Although the general public will no longer be able to access it, AI facial recognition still seems to have a future. Microsoft recognizes the value of giving certain users limited access, especially people with impaired vision.
One major cut is that the AI will no longer infer personal attributes such as hairstyle, age, and facial expression from someone's face. It's easy to see why those inferences raised concerns around privacy and safety.
On top of the changes to Azure Face, the company is now limiting which businesses can access its Custom Neural Voice service, a tool that turns text into lifelike synthetic speech.
Related News: Microsoft has introduced a new way to keep people safe from malicious software.
It’s also adding new features to its email service in Microsoft 365 that improve how the Tenant Allow/Block List works.
Previously, this feature let you block senders: if a blocked sender tried to email you, the message wouldn’t reach you.
Now, Microsoft is previewing an additional control that also stops your emails from being sent to these blocked contacts.
It means the threat of being caught out by a phishing scam is reduced, giving you another layer of security as part and parcel of your Microsoft 365 subscription.
With phishing scams becoming increasingly dangerous, it’s not a moment too soon in our view.
The feature should go into preview soon and is expected to be generally available by the end of the month. In the meantime, if you’re concerned about your business’s email security, get in touch.