Microsoft AI researchers accidentally exposed tens of terabytes (TB) of sensitive data, but the company says it has now fixed the misconfiguration that led to the incident. The exposed data included private keys and passwords, and the leak occurred while the team was publishing a storage bucket of open-source training data on GitHub.
The GitHub repository, which belonged to Microsoft’s AI research division, was spotted by cloud security startup Wiz, which shared its findings with Microsoft on June 22. Microsoft fixed the issue on June 24.
What data was exposed?
The exposure amounted to 38TB of sensitive information, including disk backups of two Microsoft employees’ personal computers. The data also contained passwords to Microsoft services, secret keys, and over 30,000 internal Microsoft Teams messages from hundreds of Microsoft employees.
The URL, which had exposed this data since 2020, was also misconfigured to allow “full control” rather than “read-only” permissions, Wiz said. This means anyone with the link could not only view the files but also delete or replace them, and potentially inject malicious content.
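For context, Azure storage content is typically shared through URLs that carry a shared access signature (SAS), whose permissions and lifetime the publisher chooses when generating the link. The sketch below, using the azure-storage-blob Python SDK with hypothetical account and container names, shows how a read-only, time-limited sharing URL might be generated so that recipients can list and download blobs but cannot modify, delete, or add to them.

```python
from datetime import datetime, timedelta, timezone

from azure.storage.blob import ContainerSasPermissions, generate_container_sas

# Hypothetical names, for illustration only.
ACCOUNT_NAME = "exampleresearchdata"
ACCOUNT_KEY = "<storage-account-key>"
CONTAINER = "open-source-training-data"

# Container-level SAS token that permits reading and listing blobs only,
# and expires after 7 days. No write, delete, or overwrite rights.
sas_token = generate_container_sas(
    account_name=ACCOUNT_NAME,
    container_name=CONTAINER,
    account_key=ACCOUNT_KEY,
    permission=ContainerSasPermissions(read=True, list=True),
    expiry=datetime.now(timezone.utc) + timedelta(days=7),
)

# The resulting URL can be shared publicly; it grants only the permissions above.
share_url = f"https://{ACCOUNT_NAME}.blob.core.windows.net/{CONTAINER}?{sas_token}"
print(share_url)
```

Scoping the token to a single container, granting only read and list permissions, and setting a short expiry are the kinds of safeguards that limit the blast radius if such a link is published more widely than intended.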
“AI unlocks huge potential for tech companies,” Wiz co-founder and CTO Ami Luttwak was quoted as saying by TechCrunch.
“However, as data scientists and engineers race to bring new AI solutions to production, the massive amounts of data they handle require additional security checks and safeguards. With many development teams needing to manipulate massive amounts of data, share it with their peers or collaborate on public open-source projects, cases like Microsoft’s are increasingly hard to monitor and avoid,” Luttwak added.
Here’s what Microsoft has to say
In a blog post, Microsoft’s Security Response Center said that “no customer data was exposed.”
“The information that was exposed consisted of information unique to two former Microsoft employees and these former employees’ workstations. No customer data was exposed, and no other Microsoft services were put at risk because of this issue,” the company said.
It added that customers do not need to take any additional action to remain secure.