AI assistants are making headlines once again, with OpenAI and Google announcing new developments at their respective events. OpenAI unveiled its latest flagship model, GPT-4o, which can reason across audio, vision, and text in real time, and which has impressed observers with the speed and accuracy of its coding and error detection. Still, concerns about job losses driven by generative AI persist: a Microsoft survey last year found that 49% of respondents feared losing their jobs to the technology.
At Google I/O, Google introduced AI Overviews, a new search feature with the potential to reshape how the web works. The tech industry has long been moving toward all-knowing, ultra-helpful virtual assistants, and with recent advances in AI, such assistants could fundamentally change how people use computers and complete everyday tasks.
Despite these exciting developments, safety concerns have resurfaced following the resignation of OpenAI co-founder and chief scientist Ilya Sutskever, along with Jan Leike, who co-led the company's safety-focused superalignment team with him. The departures sparked speculation about safety problems inside the company. Compounding the concern, former employees are bound by restrictive off-boarding agreements containing nondisclosure and non-disparagement provisions that forbid them from criticizing the company, or even from acknowledging that the NDAs exist.
OpenAI's unusual corporate structure, a capped-profit company ultimately controlled by a nonprofit, was meant to increase accountability. In practice, however, the company has stopped open-sourcing its models and has now shed senior members of its safety team.
While these developments are promising, they also raise serious questions about employment and privacy. As AI technology continues to advance, companies and policymakers will need to address these issues responsibly.