Today, Microsoft will hold its Surface Event in New York City, during which several new devices are expected to be unveiled. The tech giant may also present some of the new AI capabilities coming to Windows 11.
When will the Surface Event take place?
The Surface event is scheduled to begin on September 21 in New York City at 7 AM PST (7:30 PM India time), and a video of the event will be made available on the Microsoft event site at 10:30 PM (India time). Microsoft CEO Satya Nadella is also expected to attend the event, which will be the company's first in-person Surface event since the pandemic began.
What to expect from the Surface event
AI Features to Windows 11:
Microsoft is anticipated to release the Windows 11 update, code-named 23H2, which may include Windows Copilot, an AI-powered personal assistant. The new update, currently being tested by Windows Insiders, is expected to be available by the end of September.
According to Windows Central, Microsoft has already lined up several third-party developers, like Adobe, Mem, and Spotify, who will enable third-party plugins for Windows Copilot.
New Surface devices:
Microsoft is expected to unveil a new version of the Surface Laptop Studio, featuring a 13th-generation Intel processor, an Nvidia RTX 4060 graphics card, and up to 64 GB of DDR5 RAM. According to a report by The Verge, the Surface Laptop Studio 2 will have a design identical to the first model, with a display that slides forward to convert the laptop into a tablet.
The Surface Go 4, which could be powered by an Intel N200 chip, is another device expected to be unveiled at today's event, according to Windows Central. The publication also reports that Microsoft has redesigned the new Surface Go's internal layout to make it easier to repair.
Follow Digital Fox Media for the latest technology news.
OpenAI Leads Despite Google Gemini’s Impressive Show
Though these are tough times for Alphabet Inc.'s Google, the days between Thanksgiving and Christmas are generally a dead zone for new technology launches. ChatGPT caught the hulking search behemoth off guard a year ago, and since then, Google has been keen to portray itself as a formidable force. On Wednesday, following rumors of a delay, it unexpectedly unveiled Gemini, a new artificial intelligence model capable of identifying deceptive behavior and passing an accounting test. Social media is in awe of a demo video that Google posted (see below), but it's a sham. Google is still lagging behind OpenAI in terms of technological capabilities.

Let us begin with the specifics. The table that Google published compares Gemini to GPT-4, the top model from OpenAI. Gemini Ultra (in blue) outperforms GPT-4 on the majority of common benchmarks, according to Google's table. These benchmarks evaluate AI models using scenarios from professional law, high school physics, and morality, and these kinds of tests define nearly all the skills in the present AI race.
However, Gemini Ultra only narrowly defeated OpenAI's GPT-4 model on the majority of the benchmarks. Put differently, Google's best AI model has only slightly improved upon a task that OpenAI finished working on at least a year ago. Furthermore, Ultra is still under wraps. If, as Google suggests, Gemini Ultra is introduced in early January, it may not hold the top spot for very long: while Google has taken this long to catch up, the more agile OpenAI has been working on GPT-5, its next AI model, for about a year. Then there's the video demo on X, the website once known as Twitter, that tech experts called "jaw-dropping." At first glance, it is amazing stuff. The model demonstrates glimpses of the reasoning skills that Google's DeepMind AI group has developed over the years, such as tracking a ball of paper under a plastic cup and determining that a dot-to-dot graphic was a crab before the drawing was ever completed. That is what other AI models lack. But as Wharton professor Ethan Mollick has shown, many of the other features on display are not special and can be duplicated using ChatGPT Plus.
Google acknowledges that the video has been altered. Its YouTube description reads, "Latency has been reduced and Gemini outputs have been shortened for brevity for this demo." In other words, each response took longer in reality than it appears to in the video. In fact, the demo used neither real-time interaction nor voice delivery. A Google representative, responding to a question from Bloomberg Opinion about the video, said it was created "using still image frames from the footage, and prompting via text." They also provided a link to a page demonstrating how others might interact with Gemini using drawings, hand gestures, or other objects. Put differently, the narrator in the demo was showing Gemini still images while reading out human-written prompts. That is very different from what Google appeared to be implying: that Gemini could monitor and react to its surroundings in real time while a person held a seamless spoken conversation with it.
Additionally, the demo appears to use the yet-to-be-released Gemini Ultra model, though this is not made clear in the video. Withholding such information fits a larger marketing campaign: Google wants us to remember that it has access to more data than anybody else and employs some of the biggest teams of AI researchers in the world. It wants to show us how big its deployment network is, as it did on Wednesday by introducing less powerful versions of Gemini to Chrome, Android, and Pixel phones. However, in tech, being everywhere isn't necessarily a benefit. In the 2000s, early mobile-industry leaders Nokia Oyj and BlackBerry Ltd. learned this lesson the hard way when Apple entered the market with the iPhone, a more capable and user-friendly device, and stole their lunch. The top-performing systems are what drive software's commercial success.
It is quite likely that Google timed its extravaganza to take advantage of the recent unrest within OpenAI. According to a Wall Street Journal report, after a board revolt at the smaller AI firm temporarily ousted CEO Sam Altman and cast doubt on the company's future, Google quickly started a sales drive to convince OpenAI's corporate clients to switch to Google. With the debut of Gemini, it appears to be riding that wave of uncertainty. However, eye-catching demos only go so far, and Google has shown off amazing new technology before that was never released. Until now, Google's massive bureaucracy and layers of product managers have kept it from shipping products with the same agility as OpenAI. That is not necessarily a bad thing as society grapples with AI's revolutionary implications. But take Google's most recent show of speeding ahead with a grain of salt. It is still catching up from behind.
Google Gemini AI Video:
Portions of the viral duck video showcasing Gemini, Google's GPT-4 competitor, were recreated for the demo film. Google acknowledged that in the video, titled "Hands-on with Gemini: Interacting with Multimodal AI," the AI and the human did not actually communicate verbally. Rather than Gemini reacting to a drawing or to changing objects on the table in real time, the demo was done "using still images from the footage and prompting via text." The video risks misleading viewers about what Gemini is actually capable of, and the lack of caveats about how the inputs were really created makes it fairly dubious. "We are pleased with the attention our 'Hands-on with Gemini' video has received. We dissected the utilization of Gemini in its creation in our developer blog yesterday," stated Oriol Vinyals, VP of Research & Deep Learning Lead at Google DeepMind and co-lead of Gemini, in an X post. "We presented Gemini with sequences of several modalities—text and picture in this instance—and asked it to anticipate possible outcomes. On December 13, when access to Pro opens, developers can attempt similar things. Ultra was utilized in the knitting demo," Vinyals continued.
"All the user prompts and outputs in the video are real, shortened for brevity. The video illustrates what the multimodal user experiences built with Gemini could look like. We made it to inspire developers," said Vinyals. "When you're building an app, you can get similar results (there's always some variability with LLMs) by prompting Gemini with an instruction that allows the user to 'configure' the behavior of the model, like inputting 'you are a science expert…' before a user can engage in the same kind of back and forth dialogue. Here's a clip of what this looks like in AI Studio with Gemini Pro. We've come a long way since Flamingo & PALI, looking forward to seeing what people build with it," the VP added.

The original viral video narrated an evolving sketch of a duck from a squiggle to a completed drawing, to which Gemini responds that the duck is an unrealistic color, then exhibits surprise ("What the quack!") upon seeing a toy blue duck. It then answers various voice queries about that toy, and the demo moves on to other show-off moves, like tracking a ball in a cup-switching game, recognizing shadow-puppet gestures, reordering sketches of planets, and so on.
Google’s Duplex Demo: History of Faking It
This is not the first time Google's demo videos have been questioned. In the past, the tech giant faced doubts about the legitimacy of its Duplex demo, in which an AI assistant was shown making restaurant reservations, booking hair appointments, and even booking travel. After Google demonstrated Duplex, several journalists and experts concluded that the demonstration was staged rather than authentic. According to various media reports, the calls and tasks executed by Google Duplex were considered fake, with the absence of typical background noise during the calls cited among other suspicions.
While Google's recent unveiling of Gemini, its new AI model, has generated significant buzz, it's crucial to approach these developments with a healthy dose of skepticism. The Gemini AI, particularly its Ultra version, boasts some advancements and shows promise in certain benchmarks when compared to OpenAI's GPT-4. However, the margin of improvement is narrow, and the full capabilities of Ultra remain somewhat shrouded in mystery. The much-discussed demo video, while impressive at first glance, has raised questions about the actual real-time capabilities of Gemini. Google's admission that the video was altered for brevity and did not feature real-time interaction or voice delivery indicates a gap between the showcased potential and the current reality. This revelation highlights the importance of transparency in AI development and the need to differentiate between genuine technological breakthroughs and carefully crafted demonstrations.
Furthermore, Google’s strategy of integrating Gemini into its broader ecosystem, while showcasing its extensive data access and AI research capabilities, doesn’t necessarily guarantee success. History has shown that market dominance in technology isn’t just about widespread deployment but also hinges on the actual performance and user-friendliness of the product. As the AI landscape continues to evolve rapidly, with companies like OpenAI and Google pushing the boundaries, observers and potential users need to remain critical and informed. The true test for AI technologies like Gemini lies not just in winning short-term attention with flashy demos but in delivering sustainable, effective solutions that address real-world problems and ethical considerations in the long run.
“Microsoft-OpenAI Partnership and CMA Investigation: Shaping the Future of AI Industry Dynamics”
The IT industry has been closely monitoring the latest developments in the partnership between Microsoft and OpenAI, particularly after the UK's Competition and Markets Authority (CMA) announced that it will examine whether the collaboration between the two companies constitutes an "acquisition of control." This scrutiny comes against the backdrop of shifting dynamics in the artificial intelligence (AI) field, where collaborations are altering the rules of engagement.

Microsoft, the major international IT corporation, invested over $10 billion in OpenAI, gaining a 49% ownership position in the AI startup as a consequence. This alliance, which represents a turning point in Microsoft's AI strategy, is not just a financial commitment but an effort to influence the future course of AI technologies. Brad Smith, President of Microsoft, has made it clear that the partnership with OpenAI is not the same as an acquisition. The recent change, as he pointed out, is the inclusion of a non-voting observer from Microsoft on OpenAI's board. This role allows Microsoft to access critical information without having direct voting rights, a position that Smith argues is distinctly different from an outright acquisition.
The CMA's interest in this partnership underscores the increasing scrutiny of big tech collaborations and acquisitions, especially in the high-stakes world of AI. The regulator's primary concern appears to be whether this partnership gives Microsoft a material influence over OpenAI, altering the competitive dynamics in the AI industry. The assessment is set to explore the nature of the changes in OpenAI's governance and their implications on competition and market control.

This situation draws parallels with other significant tech acquisitions, such as Google's purchase of DeepMind, although Microsoft maintains that its relationship with OpenAI is fundamentally different. The tech giant has expressed its willingness to cooperate fully with the CMA, providing all necessary information to facilitate the review.
The unfolding events surrounding OpenAI’s governance are equally intriguing. OpenAI CEO Sam Altman’s brief ousting from the board, followed by a dramatic reinstatement, indicates the complexities and shifting power dynamics within the organization. Microsoft CEO Satya Nadella’s announcement of Altman joining Microsoft’s AI team, only for him to be reinstated as OpenAI chief, further adds to the narrative of a fluid and evolving partnership.
The CMA’s investigation is a reflection of the broader concerns surrounding big tech companies and their influence over pioneering technologies like AI. As AI continues to transform various sectors, the partnerships and investments by tech giants are being closely watched by regulators and industry observers. The outcome of this assessment could set a precedent for how future tech collaborations, especially in AI, are perceived and regulated. The Microsoft-OpenAI partnership is a landmark development in the AI domain, signifying the increasing convergence of big tech and cutting-edge AI research. While this collaboration holds immense potential for advancing AI technologies, it also raises critical questions about market control, competition, and the future of AI governance. As the CMA conducts its assessment, the tech world eagerly awaits its findings, which are likely to have far-reaching implications for the AI industry and beyond.
Examining the Impact on AI Industry Dynamics
Shift in Competitive Landscape
The partnership between Microsoft and OpenAI, along with the UK Competition and Markets Authority's (CMA) subsequent investigation, is a glaring example of how the AI business is changing. The competitive landscape of the sector is changing dramatically as IT giants like Microsoft make large investments in AI companies. This partnership, in particular, exemplifies the growing trend of consolidation in the AI sector, where large corporations are increasingly extending their influence over innovative AI startups. The CMA's investigation into whether Microsoft's role in OpenAI constitutes an "acquisition of control" is crucial in determining the future balance of power in the AI market.
Regulatory Scrutiny and Market Monopolization Concerns
The involvement of the CMA highlights the growing concern among regulators regarding the potential for monopolization in the AI industry. As AI technology becomes more integral to various sectors, ensuring a competitive environment that fosters innovation and prevents market dominance by a few players is critical. The investigation aims to assess the impact of Microsoft’s investment and its non-voting observer status on OpenAI’s board, exploring if this grants Microsoft undue influence over OpenAI’s operations and decision-making. This scrutiny reflects a broader regulatory effort to maintain market fairness in the rapidly evolving tech landscape.
Implications for AI Innovation and Collaboration
Microsoft’s significant investment in OpenAI, coupled with a strategic position on the board, has implications for the direction of AI innovation. With a major tech player involved, OpenAI’s research and development trajectories might align more closely with Microsoft’s business objectives, potentially influencing the types of AI technologies prioritized and developed. While this partnership could accelerate AI advancements by leveraging Microsoft’s resources, it also raises questions about the diversity and independence of AI research. The CMA’s assessment will thus play a crucial role in understanding how such collaborations impact the broader AI innovation ecosystem.
The Future of AI Governance
The unfolding events surrounding OpenAI’s governance, including the brief ousting and reinstatement of CEO Sam Altman, underscore the complexities of governing AI entities with significant corporate investments. These developments hint at the challenges in balancing corporate interests with the broader goals of AI ethics and societal benefit. As AI technology becomes increasingly influential, establishing transparent and responsible governance structures is essential. The CMA’s investigation into the Microsoft-OpenAI partnership could provide insights into effective governance models for AI organizations, especially those with significant corporate backing.
Navigating Challenges and Opportunities
Balancing Corporate Influence and AI Autonomy
The intricate relationship between Microsoft and OpenAI presents a challenge in maintaining a balance between corporate influence and the autonomy of AI research. While Microsoft’s investment and board observer status bring substantial resources and expertise, they also raise concerns about the independence of OpenAI’s decision-making and research directions. The challenge lies in ensuring that such corporate partnerships do not stifle innovation or lead to a monopolization of AI advancements. Ensuring that OpenAI continues to operate with a degree of autonomy is crucial for the diversity and health of the broader AI ecosystem.
Antitrust Considerations in the AI Arena
The CMA’s investigation into Microsoft and OpenAI’s partnership underscores the growing importance of antitrust considerations in the AI industry. As AI technologies become more central to economic and societal functions, the need for regulatory bodies to monitor and manage the influence of large corporations becomes increasingly critical. This case could set a precedent for how future collaborations in the AI space are viewed and regulated, particularly in terms of maintaining healthy competition and preventing market monopolization.
Fostering Ethical AI Development
Another aspect of this partnership is the need to foster ethical AI development. With Microsoft’s increased involvement in OpenAI, ensuring that AI technologies are developed and deployed responsibly becomes even more significant. This includes considerations around AI bias, transparency, and the impact of AI on jobs and society. The CMA’s review, while focused on market control and competition, indirectly touches upon these broader implications of AI development and the role of big tech companies in shaping ethical standards.
The Global AI Race and Geopolitical Implications
The Microsoft-OpenAI partnership also has geopolitical implications, particularly in the context of the global AI race. As nations and corporations compete for dominance in AI technology, partnerships like this one could shift the balance of power in the global tech landscape. The outcome of the CMA’s investigation may influence international perceptions and strategies around AI development, potentially affecting global collaborations and tech diplomacy.
The alliance between Microsoft and OpenAI, together with the UK Competition and Markets Authority's (CMA) close examination of it, highlights the significant influence that business alliances have on the AI sector. It represents the changing landscape of a sector in which tech behemoths are increasingly influencing the direction of AI innovation and research. This development has far-reaching implications for competitive landscapes, regulatory frameworks, and the future of AI governance. The CMA's investigation serves as a barometer for how regulators worldwide are navigating the complex terrain of AI. It highlights the need to strike a delicate balance between fostering innovation and preventing monopolization, ensuring that the AI ecosystem remains diverse and vibrant. The outcome of this investigation may set a precedent for how future partnerships and acquisitions in AI are assessed, emphasizing the importance of maintaining fair competition.
Google removes 17 predatory loan apps targeting Indian users from the Play Store
Google has removed 17 applications from the Play Store that targeted Indian users with data harvesting and predatory lending techniques. These apps, which researchers have dubbed "SpyLoan" apps, were designed to exploit people's trust in reputable lenders.

In an investigation published today, ESET Research reports that these malicious applications deceived users into granting them broad access to their personal information. Once installed, the apps harvested a variety of data, including contact lists, browsing history, SMS messages, and photos. This data was then used to blackmail and harass victims who had taken out loans at exorbitant interest rates. The apps were reportedly available in the US, Mexico, Indonesia, Colombia, Kenya, Egypt, Pakistan, Singapore, the Philippines, Nigeria, and Thailand. Researchers estimate that 12 million people downloaded these apps before Google removed them from the Play Store.

By masquerading as reputable lenders, SpyLoan applications fooled users into downloading them. After installation, users unwittingly granted these apps broad permissions, giving them access to personal data. Victims were then blackmailed into paying astronomical interest rates on drastically shortened repayment terms, making repayment all but impossible. These predatory apps essentially prey on desperate individuals who need immediate financial support.
Victims paid exorbitant costs:
Furthermore, victims of these loan applications report that the total annual cost of the loans (TAC) is substantially higher than advertised, and the repayment period substantially shorter than what reputable banks offer. A number of borrowers were coerced into settling their debts within five days, an improbable deadline for many. The report also disclosed that the actual yearly cost of these loans varied greatly, ranging from 34% to 160%. The effects of these SpyLoan apps have reportedly been disastrous for victims, and some have sadly resorted to suicide as a result of the extreme pressure to repay their loans.
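To see why short repayment windows make these loans so punishing, the annualization arithmetic can be sketched in a few lines. The 10% fee and five-day term below are hypothetical illustration numbers, not figures from the report:

```python
def annualized_cost(fee_rate: float, term_days: int) -> float:
    """Simple (non-compounding) annualization of a flat loan fee.

    A fee charged over a short term is scaled up to a full year,
    which is roughly how an annual-cost figure is quoted.
    """
    return fee_rate * 365 / term_days

# A flat 10% fee due in just 5 days annualizes to 730% per year,
# far beyond the 34%-160% range cited in the report.
rate = annualized_cost(0.10, 5)
print(f"{rate:.0%}")  # → 730%
```

The same fee spread over a year would simply be a 10% annual cost; it is the compressed five-day deadline that multiplies the effective rate.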
Fearing their loan applications would be denied, users came under tremendous pressure from these dishonest apps to divulge large amounts of personal information. This strategy, which exploited users' vulnerabilities, generated more than 12 million downloads globally before Google's intervention. The apps' deceptive tactics were evident in how they forced users to hand over private information under the pretense of loan-approval procedures, an unethical practice that raised serious privacy concerns and drew attention to the apps' questionable authenticity. Recognizing the need to safeguard user privacy and data security as the situation worsened, Google intervened and removed the apps from the Play Store in an effort to stop digital exploitation and protect user data. The incident highlights the importance of caution when using digital platforms, particularly when sharing personal information and conducting transactions.
Google took the apps down from the Play Store
Google took down over 200 SpyLoan apps from the Play Store last year in an effort to shield users from dangerous software. The company acknowledges that, despite this action, it remains far too easy for users to download and install potentially harmful apps, and it advises users to exercise caution and take preventive measures to ensure their safety. This effort underscores the difficulty of policing a large digital ecosystem and demonstrates Google's continued commitment to user security. The removal of these apps is part of a larger strategy to combat predatory software, one in which the company emphasizes the value of user awareness alongside its own enforcement. The situation shows how hard it is to keep app stores secure, and how users and tech companies must work together to maintain a safe online environment.
Ways to keep yourself safe
Here are some crucial steps you should take to prevent yourself from becoming a victim of such malicious apps:
Do a Comprehensive Study of Apps: Do your homework thoroughly before downloading any financial or loan apps. To verify an app’s legitimacy, check user reviews, ratings, and its online reputation.
Check the permissions for the app: Avoid granting apps more permissions than they genuinely need. Avoid downloading any app if it asks to access private information that doesn't seem necessary for it to function.
Examine the Developer's Details: When downloading any app, make sure you check the developer's information. Reputable developers with a solid track record typically create legitimate apps.
Use Trusted Sources: Make sure you only download apps from official stores like the Apple App Store or Google Play. Third-party apps may not go through rigorous security checks, so stay away from downloading them.
Remain Informed: Become knowledgeable about the most recent scams and security threats. Being aware of harmful activities is a great way to protect yourself from them.