1010.team is your daily personal feed covering IT, new technologies, internet business, startups, cryptocurrency, and cybersecurity.
We publish information only from trusted sources.
Stay up to date on the latest IT news with us.
Turns out Elon Musk’s FDA prediction was only off by about a month. After reportedly denying the company’s overtures in March, the FDA approved Neuralink’s application to begin human trials of its prototype Link brain-computer interface (BCI) on Thursday.
Founded in 2016, Neuralink aims to commercialize BCIs in wide-ranging medical and therapeutic applications — from stroke and spinal cord injury (SCI) rehabilitation, to neural prosthetic controls, to the capacity “to rewind memories or download them into robots,” Neuralink CEO Elon Musk promised in 2020. BCIs essentially translate the analog electrical impulses of your brain (monitoring it using hair-thin electrodes delicately threaded into that grey matter) into the digital 1’s and 0’s that computers understand. Since that BCI needs to be surgically installed in a patient’s noggin, the FDA — which regulates such technologies — requires that companies conduct rigorous safety testing before giving its approval for commercial use.
In March, the FDA rejected Neuralink’s application to begin human trials reportedly in part due to all the test animals that kept dying after having the prototype BCI implanted. According to internal documents acquired by Reuters in December, more than 1,500 animals had been killed in the development of the Neuralink BCI since 2018. The US Department of Agriculture’s (USDA) Inspector General has since launched an investigation into those allegations.
The FDA’s reticence was also born from concerns about the design and function of the interface when implanted in humans. “The agency’s major safety concerns involved the device’s lithium battery; the potential for the implant’s tiny wires to migrate to other areas of the brain; and questions over whether and how the device can be removed without damaging brain tissue,” current and former Neuralink employees told Reuters in March.
While Neuralink has obtained FDA approval to begin its study, the company is not yet seeking volunteers. “This is the result of incredible work by the Neuralink team in close collaboration with the FDA and represents an important first step that will one day allow our technology to help many people,” Neuralink tweeted on Thursday. “Recruitment is not yet open for our clinical trial.”
Update, 05/26/23, 11:28 AM ET: This story has been updated to include a response from Physicians Committee for Responsible Medicine, the animal welfare advocacy group that previously uncovered Neuralink’s animal deaths.
On May 25, 2023, Elon Musk’s brain-computer interface company Neuralink shared via Twitter that it had received approval from the FDA to begin human clinical trials. It is important to remember that such FDA approval is not an acquittal of Neuralink’s well-documented track record of animal cruelty and sloppy scientific studies. The approval is also not a guarantee that a Neuralink device will someday be commercially available, as a significant number of medical devices that begin clinical trials never reach the market. In addition, Neuralink will likely continue to conduct experiments on monkeys, pigs, and other animals even after clinical trials have begun. Past animal experiments revealed serious safety concerns stemming from the product’s invasiveness and rushed, sloppy actions by company employees. As such, the public should continue to be skeptical of the safety and functionality of any device produced by Neuralink.
The Physicians Committee continues to urge Elon Musk and Neuralink to shift to developing a noninvasive brain-computer interface. Researchers elsewhere have already made progress to improve patient health using such noninvasive methods, which do not come with the risk of surgical complications, infections, or additional operations to repair malfunctioning implants. Noninvasive devices are already demonstrating the ability to improve quality of life for older adults and elderly patients, translate brain activity into intelligible speech, and assist paralyzed patients.
This article originally appeared on Engadget at https://www.engadget.com/neuralink-receives-fda-clearance-to-begin-human-trials-of-its-brain-computer-interface-001504243.html?src=rss
Last February, the Biden administration unveiled its $5 billion plan to expand EV charging infrastructure across the country. Not only will the Department of Transportation help states build half a million EV charging stations by 2030, the White House also convinced Tesla to share a portion of its existing Supercharger network with non-Tesla EVs. On Thursday, Ford became the first automaker to formalize that pact with Tesla, announcing during a Twitter Spaces event that Ford electric vehicle customers will get “access to more than 12,000 Tesla Superchargers across the U.S. and Canada” starting in spring 2024, per the company’s release.
Because Tesla uses a proprietary charge port design for its vehicles, Ford owners will initially need to rely on a Tesla-developed adapter connected to the public charging cable in order to replenish their Ford F-150 Lightning, Mustang Mach-E and E-Transit vehicles. Ford also announced that, beginning with the 2025 model year, it will switch from the existing Combined Charging System (CCS) port to Tesla’s now open-source NACS charge port. These 12,000 additional chargers will join Ford’s 84,000-strong Blue Oval charging station network.
“Tesla has led the industry in creating a large, reliable and efficient charging system and we are pleased to be able to join forces in a way that benefits customers and overall EV adoption,” Marin Gjaja, chief customer officer of Ford Model e, said in the release. “The Tesla Supercharger network has excellent reliability and the NACS plug is smaller and lighter. Overall, this provides a superior experience for customers.”
This article originally appeared on Engadget at https://www.engadget.com/ford-ev-drivers-will-get-access-to-12000-north-american-tesla-superchargers-next-spring-221752191.html?src=rss
The White House has made responsible AI development a focus of this administration in recent months, releasing a Blueprint for an AI Bill of Rights, developing a risk management framework, committing $140 million to found seven new National AI Research Institutes and weighing in on how private enterprises are leveraging the technology. On Tuesday, the executive branch announced its next steps toward that goal, including releasing an update to the National AI R&D Strategic Plan for the first time since 2019 as well as issuing a request for public input on critical AI issues. The Department of Education also dropped its hotly anticipated report on the effects and risks of AI for students.
The OSTP’s National AI R&D Strategic Plan, which guides the federal government’s investments in AI research, hadn’t been updated since the Trump administration (which gutted OSTP staffing levels). The plan seeks to promote responsible innovation in the field that serves the public good without infringing on the public’s rights, safety and democratic values, having done so until this point through eight core strategies. Tuesday’s update adds a ninth, establishing “a principled and coordinated approach to international collaboration in AI research,” per the White House.
“The federal government plays a critical role in ensuring that technologies like AI are developed responsibly, and to serve the American people,” the OSTP argued in its release. “Federal investments over many decades have facilitated many key discoveries in AI innovations that power industry and society today, and federally funded research has sustained progress in AI throughout the field’s evolution.”
The OSTP also wants to hear the public’s thoughts on both its new strategies and the technology’s development in general. As such, it is inviting “interested individuals and organizations” to submit their comments on one or more of nearly 30 prompt questions, including “How can AI rapidly identify cyber vulnerabilities in existing critical infrastructure systems and accelerate addressing them?” and “How can Federal agencies use shared pools of resources, expertise, and lessons learned to better leverage AI in government?” through the Federal eRulemaking Portal by 5:00 pm ET on July 7, 2023. Responses should be limited to 10 pages of 11-point font.
The Department of Education also released its report on the promises and pitfalls of AI in schools on Tuesday, focusing on how it impacts Learning, Teaching, Assessment, and Research. Despite recent media hysteria about generative AIs like ChatGPT fomenting the destruction of higher education by helping students write their essays, the DoE noted that AI “can enable new forms of interaction between educators and students, help educators address variability in learning, increase feedback loops, and support educators.”
This article originally appeared on Engadget at https://www.engadget.com/white-house-reveals-its-next-steps-towards-responsible-ai-development-190636857.html?src=rss
There are vanishingly few places in Microsoft’s business ecosystem that remain untouched by January’s OpenAI deal, with GPT-4-backed chatbot and generative capabilities coming to Office products like Word and Excel, to Bing Search, and integrating directly into the Edge browser. During the Microsoft Build 2023 conference on Tuesday, company executives clarified and confirmed that its 365 Copilot AI — the same one going into Office — will be “natively integrated” into the Edge browser.
Microsoft 365 Copilot essentially takes all of your Graph information — data from your calendar, Word docs, emails and chat logs — and smashes it together, using the resulting informatic slurry to train an array of large language models that provide AI-backed assistance personalized to your business.
“You can type natural language requests like ‘Tell my team how we updated the product strategy today,'” Lindsay Kubasik, Group Product Manager, Edge Enterprise wrote in a Tuesday blog post. “Microsoft 365 Copilot will generate a status update based on the morning’s meetings, emails and chat threads.”
By integrating 365 Copilot into the browser itself, users will be able to request additional context even more directly. “As you’re looking at a file your colleague shared, you can simply ask, ‘What are the key takeaways from this document?’” and get answers from 365 Copilot in real-time. Even on-page search (ctrl+F) is getting smarter thanks to the deeper integration. The company is also incorporating the same open plugin standard launched by OpenAI, ensuring interoperability between ChatGPT and 365 Copilot products.
But it’s not ready for rollout just yet and there’s no word on when that will change. “Microsoft 365 Copilot is currently in private preview,” a Microsoft rep told Engadget. “Microsoft 365 Copilot will be natively integrated into Microsoft Edge, and we will have more to share at a later date.”
On the other hand, Microsoft’s digital co-working product, Edge Workspaces, will be moving out of preview altogether in the coming months, Kubasik noted. Workspaces allows teams to share links, project websites and working files as a shared set of secured browser tabs. Furthermore, the company is “evolving” its existing work experience into Microsoft Edge for Business. This will include unique visual elements and cues — which should begin rolling out to users today — along with “enterprise controls, security, and productivity features” designed to help keep remote workers’ private lives better separated from their work lives.
The company recognizes the need for “a new browser model that enhances users’ privacy while maintaining crucial, enterprise grade controls set at the organizational level,” Kubasik wrote. “Microsoft Edge for Business honors the needs of both end users and IT Pros as the browser that automatically separates work and personal browsing into dedicated browser windows with their own separate caches and storage locations, so information stays separate.”
Microsoft Edge for Business enters preview today on managed devices. If your organization isn’t already using the Edge ecosystem, fear not, a preview for unmanaged devices is in the works for the coming months.
This article originally appeared on Engadget at https://www.engadget.com/microsoft-confirms-365-copilot-ai-will-be-natively-integrated-into-edge-150007852.html?src=rss
Google has stood at the forefront of many of the tech industry’s AI breakthroughs in recent years, Zoubin Ghahramani, Vice President of Google DeepMind, declared in a blog post, asserting that the company’s work on foundation models is “the bedrock for the industry and the AI-powered products that billions of people use daily.” On Wednesday, Ghahramani and other Google executives took the Shoreline Amphitheater stage to show off the company’s latest and greatest large language model, PaLM 2, which now comes in four sizes able to run locally on everything from mobile devices to server farms.
PaLM 2, obviously, is the successor to Google’s existing PaLM model that, until recently, powered its experimental Bard AI. “Think of PaLM as a general model that then can be fine tuned to achieve particular tasks,” Ghahramani explained during a reporters’ call earlier in the week. “For example: health research teams have fine tuned PaLM with medical knowledge to help answer questions and summarize insights from a variety of dense medical texts.” Ghahramani also noted that PaLM was “the first large language model to perform at an expert level on the US medical licensing exam.”
Bard now runs on PaLM 2, which offers improved multilingual, reasoning, and coding capabilities, according to the company. The language model has been trained far more heavily on multilingual texts than its predecessor, covering more than 100 languages with improved understanding of cultural idioms and turns of phrase.
Even more impressive is that Google was able to spin off application-specific versions of the base PaLM system dubbed Gecko, Otter, Bison and Unicorn.
“We built PaLM 2 to be smaller, faster and more efficient, while increasing its capability,” Ghahramani said. “We then distilled this into a family of models in a wide range of sizes so the lightest model can run as an interactive application on mobile devices on the latest Samsung Galaxy.” In all, Google is announcing more than two dozen products that will feature PaLM capabilities at Wednesday’s I/O event.
This is a developing story. Please check back for updates.
This article originally appeared on Engadget at https://www.engadget.com/google-unveils-its-multilingual-code-generating-palm-2-language-model-180805304.html?src=rss
For the past two months, anybody wanting to try out Google’s new chatbot AI, Bard, had to first register their interest and join a waitlist before being granted access. On Wednesday, the company announced that those days are over. Bard will immediately be dropping the waitlist requirement as it expands to 180 additional countries and territories. What’s more, this expanded Bard will be built atop Google’s newest Large Language Model, PaLM 2, making it more capable than ever before.
Google hurriedly released the first generation Bard back in February after OpenAI’s ChatGPT came out of nowhere and began eating the industry’s collective lunch like Gulliver in a Lilliputian cafeteria. Matters were made worse when Bard’s initial performances proved less than impressive — especially given Google’s generally accepted status at the forefront of AI development — which hurt both Google’s public image and its bottom line. In the intervening months, the company has worked to further develop PaLM, the language model that essentially powers Bard, allowing it to produce better quality and higher-fidelity responses, as well as perform new tasks like generating programming code.
As Google executives announced at the company’s I/O 2023 keynote on Wednesday, Bard has been switched over to the new PaLM 2 platform. As such, users can expect a bevy of new features and functions to roll out in the coming days and weeks. Those include more visual responses to your queries, so when you ask for “must-see sights” in New Orleans, you’ll be presented with images of the sights you’d see rather than just a bulleted list or text-based description. Conversely, users will be able to more easily input images to Bard alongside their written queries, bringing Google Lens capabilities to Bard.
Even as Google mixes and matches AI capabilities among its products — 25 new offerings running on PaLM 2 are being announced today alone — the company is looking to ally with other industry leaders to further augment Bard’s abilities. Google announced on Wednesday that it is partnering with Adobe to bring its Firefly generative AI to Bard as a means to counter Microsoft’s Bing Chat and DALL-E 2 offering.
Finally, Google shared that it will be implementing a number of changes and updates in response to feedback received from the community since launch. Click on a line of generated code or a chatbot answer and Bard will provide a link to that specific bit’s source. There will be a new Dark theme. And the company is working to add an export feature so that users can easily run generated programming code on Replit or send their generated works to Docs or Gmail.
Follow all of the news from Google I/O 2023 right here.
This article originally appeared on Engadget at https://www.engadget.com/google-bard-transitions-to-palm-2-and-expands-to-180-countries-172908926.html?src=rss