Technology

Explore top LinkedIn content from expert professionals.

  • View profile for Steve Suarez®

    Chief Executive Officer | Entrepreneur | Board Member | Senior Advisor McKinsey | Harvard & MIT Alumnus | Ex-HSBC | Ex-Bain

    47,273 followers

    A milestone in quantum physics — rooted in a student project.

    What began as an undergraduate thesis at Caltech — and continued through the same student's graduate work at MIT — has grown into a collaborative experiment between researchers from MIT, Caltech, Harvard, Fermilab, and Google Quantum AI. Using Google's Sycamore quantum processor, the team simulated traversable wormhole dynamics — a quantum system that behaves analogously to how certain wormholes are predicted to work in theoretical physics.

    Here's what they did:
    - Implemented two coupled SYK-like quantum systems on the processor, representing black holes in a holographic model.
    - Sent a quantum state into one system.
    - Applied an effective "negative energy" pulse to make the simulated wormhole traversable.
    - Observed the state emerge on the other side — consistent with quantum teleportation.

    This wasn't just classical computer modeling — it ran on real qubits, using 164 two-qubit gates across nine qubits.

    Why it matters: the results are consistent with the ER=EPR conjecture, which suggests a deep link between quantum entanglement and spacetime geometry. In the holographic picture, patterns of entanglement can be interpreted as wormhole-like "bridges." This experiment shows how quantum processors can begin to probe aspects of quantum gravity in a laboratory setting, complementing astrophysical observations and theoretical work.

    While no physical wormhole was created, this is a step toward using quantum computers to explore some of the most fundamental questions in physics.

    What breakthrough in science excites you most? Share your thoughts below — and let's discuss how quantum computing is reshaping our understanding of reality.

    ♻️ Repost to help people in your network. And follow me for more posts like this. CC: thebrighterside
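
    [Editor's note] The full protocol involves learned SYK Hamiltonians and is beyond a short snippet, but the quantum teleportation primitive it builds on is easy to sketch. Below is a minimal example using Cirq, Google's open-source framework for processors like Sycamore; the library choice is mine, and this is an illustration of the primitive, not the experiment's circuit:

      # Minimal sketch of standard quantum teleportation in Cirq; this is the
      # primitive the wormhole experiment generalises, NOT the SYK circuit.
      import cirq

      msg, left, right = cirq.LineQubit.range(3)

      circuit = cirq.Circuit(
          cirq.X(msg) ** 0.25,                   # arbitrary state to "send in"
          cirq.H(left), cirq.CNOT(left, right),  # entangle the two "sides"
          cirq.CNOT(msg, left), cirq.H(msg),     # Bell-basis rotation on sender
          cirq.CNOT(left, right),                # corrections, written in the
          cirq.CZ(msg, right),                   # deferred-measurement form
      )

      result = cirq.Simulator().simulate(circuit)
      print(result.final_state_vector)           # `right` now carries the state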

  • View profile for Andreas Horn

    Head of AIOps @ IBM || Speaker | Lecturer | Advisor

    225,252 followers

    McKinsey & Company: "Generative AI requires deep integration into the enterprise stack." ⬇️

    In its latest analysis, McKinsey illustrates how generative AI, when properly integrated, can transform customer journeys — using the example of a travel-agent bot (via an AI agent). A great example that proves: to succeed with GenAI, it's not enough to simply add a model. You have to rethink your entire system — end to end.

    How it works: multi-layered GenAI integration ⬇️

    1. Customer Layer
    → The user logs in, reviews options, and either completes the task or escalates to a live agent — all without needing to understand what's happening behind the scenes. This is the experience layer where trust, speed, and personalization matter most.

    2. Interaction Layer
    → Manages the dialogue with the user:
    - Chatbot initiates and guides the conversation
    - Agent escalation is triggered when AI alone can't resolve the issue

    3. Generative AI Layer
    → Executes intelligent model actions based on context:
    - Pulls user data
    - Checks policies
    - Generates options
    - Executes next steps

    4. Backend App Layer
    → Connects AI to core enterprise systems:
    - Authentication and identity services
    - Policy enforcement and booking workflows
    - Agent assignment logic

    5. Data Layer
    → Provides real-time contextual inputs:
    - Customer ID
    - Booking history
    - Policy rules
    - Agent directories

    6. Infrastructure Layer
    → Powers scale, performance, and governance:
    - Cloud or hybrid infrastructure
    - Model orchestration
    - Low-latency interaction support
    - Security and data governance

    Bottom line: enterprises won't win with GenAI by treating it as a bolt-on feature. The real differentiators will be those who embed AI at every layer — from user interfaces to business logic, data pipelines, and infrastructure. AI integration is not a side project. It's a re-architecture of the digital enterprise. The unlock isn't more models. It's deeper integration.

    Full study in the comments.

    I explore these developments around AI agents — and what they mean for real-world use cases — in my weekly newsletter. You can subscribe here for free: https://lnkd.in/dbf74Y9E
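
    [Editor's note] To make the layering concrete, here is an entirely hypothetical Python sketch of the travel-agent flow above, with each layer reduced to a single function so the boundaries stay visible. All names are invented; a real system would put models, identity services, and data stores behind these calls:

      # Hypothetical sketch of the layered travel-agent flow described above.
      from dataclasses import dataclass

      @dataclass
      class Context:                                # Data layer: live inputs
          customer_id: str
          booking_history: list
          policy_rules: list

      def fetch_context(customer_id: str) -> Context:      # Backend app layer
          # Would call identity, booking, and policy services in production.
          return Context(customer_id, ["BKG-001"], ["refundable within 24h"])

      def generate_options(ctx: Context, request: str) -> list:  # GenAI layer
          # Would prompt a model with the retrieved context in production.
          return [f"For '{request}': option allowed by '{rule}'"
                  for rule in ctx.policy_rules]

      def handle_message(customer_id: str, request: str) -> str:  # Interaction
          ctx = fetch_context(customer_id)
          options = generate_options(ctx, request)
          if not options:
              return "Escalating to a live agent."   # escalation path
          return "\n".join(options)

      # Customer layer: the user-facing entry point.
      print(handle_message("C-42", "Change my flight"))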

  • View profile for CK Lim BSc, MBA

    Sustainability | Darwin’s Swiss Army Knife | Equities | Regional Leadership | P&L Management | Business & Digital Transformation | Media Trained

    27,628 followers

    SUSTAINABILITY LEADERSHIP SERIES - ENVIRONMENTAL AND SOCIAL PROFITS

    Here's an excellent example of how sustainability initiatives can be profitable both environmentally and socially. In 2017, the Batesville School District in Arkansas faced a $250,000 budget deficit and struggled to retain teachers due to low salaries. To address these challenges, the district conducted an energy audit and implemented energy-efficiency measures, including the installation of over 1,400 solar panels across its facilities.

    These initiatives reduced the district's annual energy consumption by 1.6 million kilowatt-hours, transforming the budget deficit into a $1.8 million surplus over three years. The financial turnaround enabled the district to increase teacher salaries by up to $15,000, making Batesville one of the highest-paying districts in the region.

    Superintendent Michael Hester highlighted that the solar project not only improved teacher retention and recruitment but also provided educational opportunities for students to learn about renewable energy. This success story has inspired other school districts to explore similar renewable energy solutions to address financial constraints and invest in their educators.

    #LBFalumni #SkyHighTower #Sustainability Leadership Series (archived posts) --> https://lnkd.in/gmK6cbMV

  • View profile for Marie Stephen Leo

    Data & AI @ Sephora | LinkedIn Top Voice

    15,624 followers

    LLM applications are frustratingly difficult to test due to their probabilistic nature. However, testing is crucial for customer-facing applications to ensure the reliability of generated answers. So, how does one effectively test an LLM app?

    Enter Confident AI's DeepEval: a comprehensive open-source LLM evaluation framework with excellent developer experience.

    Key features of DeepEval:
    - Ease of use: very similar to writing unit tests with pytest.
    - Comprehensive suite of metrics: 14+ research-backed metrics for relevancy, hallucination, etc., including label-less standard metrics, which can quantify your bot's performance even without labeled ground truth! All you need is the input and output of the bot. See the list of metrics and required data in the image below!
    - Custom metrics: tailor your evaluation process by defining custom metrics as your business requires.
    - Synthetic data generator: create an evaluation dataset synthetically to bootstrap your tests.

    My recommendations for LLM evaluation:
    - Metric model: use OpenAI GPT-4 as the metric model as much as possible.
    - Test dataset generation: use the DeepEval Synthesizer to generate a comprehensive set of realistic questions!
    - Bulk evaluation: if you are running multiple metrics on multiple questions, generate the responses once, store them in a pandas data frame, and calculate all the metrics in bulk with parallelization.
    - Quantify hallucination: I love the faithfulness metric, which indicates how much of the generated output is factually consistent with the context provided by the retriever in RAG!
    - CI/CD: run these tests automatically in your CI/CD pipeline to ensure every code change and prompt change doesn't break anything.
    - Guardrails: some high-speed tests can be run on every API call in a post-processor before responding to the user. Leave the slower tests for CI/CD.

    🌟 DeepEval GitHub: https://lnkd.in/g9VzqPqZ
    🔗 DeepEval bulk evaluation: https://lnkd.in/g8DQ9JAh

    Let me know in the comments if you have other ways to test LLM output systematically! Follow me for more tips on building successful ML and LLM products!
    Medium: https://lnkd.in/g2jAJn5
    X: https://lnkd.in/g_JbKEkM

    #generativeai #llm #nlp #artificialintelligence #mlops #llmops
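
    [Editor's note] For a feel of the pytest-style workflow, here is a short test modeled on DeepEval's documented quickstart; metric names and parameters can shift between versions, so treat it as a sketch rather than copy-paste:

      # Sketch modeled on DeepEval's pytest-style quickstart; class names and
      # parameters may vary across versions.
      from deepeval import assert_test
      from deepeval.metrics import AnswerRelevancyMetric
      from deepeval.test_case import LLMTestCase

      def test_answer_relevancy():
          test_case = LLMTestCase(
              input="What if these shoes don't fit?",
              actual_output="We offer a 30-day full refund at no extra cost.",
          )
          # Label-less metric: needs only the bot's input and output.
          metric = AnswerRelevancyMetric(threshold=0.7, model="gpt-4")
          assert_test(test_case, [metric])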

  • View profile for Brij kishore Pandey

    AI Architect | AI Engineer | Generative AI | Agentic AI

    699,733 followers

    Demystifying Software Testing

    1️⃣ Functional Testing: The Basics
    - Unit Testing: isolating individual code units to ensure they work as expected. Think of it as testing each brick before building a wall.
    - Integration Testing: verifying how different modules work together. Imagine testing how the bricks fit into the wall.
    - System Testing: putting it all together, ensuring the entire system functions as designed. Now, test the whole building for stability and functionality.
    - Acceptance Testing: the final hurdle! Here, users or stakeholders confirm the software meets their needs. Think of it as the grand opening ceremony for your building.

    2️⃣ Non-Functional Testing: Beyond the Basics
    - Performance Testing: assessing speed, responsiveness, and scalability under different loads. Imagine testing how many people your building can safely accommodate.
    - Security Testing: identifying and mitigating vulnerabilities to protect against cyberattacks. Think of it as installing security systems and testing their effectiveness.
    - Usability Testing: evaluating how easy and intuitive the software is to use. Imagine testing how user-friendly your building is for navigation and accessibility.

    3️⃣ Other Testing Avenues: The Specialized Crew
    - Regression Testing: ensuring new changes haven't broken existing functionality. Imagine checking your building for cracks after renovations.
    - Smoke Testing: a quick sanity check to ensure basic functionality before further testing. Think of turning on the lights and checking basic systems before a deeper inspection.
    - Exploratory Testing: unstructured, creative testing to uncover unexpected issues. Imagine a detective searching for hidden clues in your building.

    Have I overlooked anything? Please share your thoughts—your insights are priceless to me.
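
    [Editor's note] Since the list leads with the unit/integration split, here is a tiny pytest-style sketch with made-up functions showing the difference in practice; the first test checks one brick, the second checks bricks working together:

      # Made-up example of the unit vs. integration split, runnable with pytest.

      def add_tax(price: float, rate: float) -> float:
          """One 'brick': a price with tax applied."""
          return round(price * (1 + rate), 2)

      def checkout(prices: list, rate: float) -> float:
          """A 'wall': the order total, built from add_tax."""
          return sum(add_tax(p, rate) for p in prices)

      def test_add_tax():        # unit test: one brick in isolation
          assert add_tax(100.0, 0.2) == 120.0

      def test_checkout():       # integration test: bricks fitting together
          assert checkout([100.0, 50.0], 0.2) == 180.0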

  • View profile for Eric Schmidt

    Former CEO and Chairman, Google; Chair and CEO of Relativity Space

    74,542 followers

    Today, the United States Air Force unveiled "Model One," a groundbreaking initiative using "digital twins"—computerized simulators that replicate real-world systems with incredible accuracy. This marks a major leap in innovation and digital transformation.

    The conflict in Ukraine has shown how crucial software and data are in modern warfare. Digital twins will transform not just military strategies but also industries like healthcare and agriculture. I co-authored an op-ed in The Wall Street Journal with Will Roper that explores this. We draw parallels with Formula 1, where digital engineering has revolutionized car design, highlighting the rapid impact of data-driven innovation.

    AI trained through these simulations promises to unlock new potential in both military and civilian sectors. As we advance, the importance of cybersecurity and digital trust is paramount. Ensuring the safety and reliability of our digital and physical worlds is critical.

    Model One represents a bold step into the future, with transformative potential for many sectors. I invite you to read the full op-ed to explore how digital twins could shape our world and drive us toward an exciting, innovative future. https://lnkd.in/eigWSPjq

    #Innovation #DigitalTwins #AI #Technology #FutureTech #CyberSecurity #ModelOne #USAirForce #DigitalTransformation #Ukraine

  • View profile for Dr. Barry Scannell

    AI Law & Policy | Partner in Leading Irish Law Firm William Fry | Member of Irish Government’s Artificial Intelligence Advisory Council | PhD in AI & Copyright | LinkedIn Top Voice in AI | Global Top 200 AI Leaders 2025

    57,078 followers

    15 weeks left before the first rules of the AI Act come into effect. Struggling with where to start on AI implementation and compliance?

    Start with a multidisciplinary team; conduct an AI inventory; carry out AI impact assessments; draft AI policies; amend contracts, policies, and data protection documents to reflect AI's role in your organisation. Ensure your team is trained in AI literacy, as required under the AI Act.

    To navigate AI implementation and compliance under the EU AI Act, companies must begin by understanding its scope and risk-based approach. The Act categorises AI systems as prohibited, high-risk, or general-purpose. Prohibited AI systems (the first rules coming in) include those exploiting vulnerabilities or engaging in certain AI emotion recognition. High-risk systems, such as those used in the management of critical infrastructure, require strict oversight, including documentation, risk assessments, and ongoing monitoring. General-purpose AI systems, widely used across industries, may also face regulatory scrutiny due to their broad impact.

    The first step for companies is conducting a comprehensive AI inventory. This involves cataloguing all AI systems in use or under development to determine their classification under the AI Act. Through this inventory, companies can assess their compliance obligations and identify any systems that may need modification or discontinuation to meet the Act's standards.

    Data protection is a cornerstone of AI compliance. The AI Act mandates that data used in AI systems be high quality, representative, and free from bias. This is especially crucial for high-risk systems, which must undergo continuous risk assessments to protect fundamental rights. GDPR compliance is also essential for any AI system that processes personal data, and companies must ensure their data governance strategies focus on transparency, accountability, and safeguarding individual rights.

    Contracts are a critical component of AI implementation. Organisations must revisit and amend contracts to address how AI impacts their legal and operational frameworks. These amendments should explicitly cover liability for AI-generated decisions, intellectual property ownership of AI-generated outputs, and data protection compliance, and should be drafted to minimise legal exposure. Additionally, intellectual property issues around AI, such as ownership of outputs or the use of third-party data, should be clearly defined in these agreements.

    Following the AI inventory, companies must conduct an AI impact assessment. This assessment includes both a Data Protection Impact Assessment (DPIA) and a Fundamental Rights Impact Assessment (FRIA).

    The extraterritorial scope of the AI Act means that even non-EU companies must comply if their AI systems impact the EU market. Non-compliance can result in significant fines, making early compliance essential.

    15 weeks left to comply.
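
    [Editor's note] As a concrete starting point for the inventory step, here is an entirely hypothetical sketch of a machine-readable inventory entry that maps each system to one of the Act's categories; the fields are illustrative, not a legal template:

      # Hypothetical AI inventory entry mapped to AI Act risk categories.
      # Illustrative only; not a legal template.
      from dataclasses import dataclass, field
      from enum import Enum

      class RiskCategory(Enum):
          PROHIBITED = "prohibited"            # e.g. exploiting vulnerabilities
          HIGH_RISK = "high-risk"              # e.g. critical infrastructure
          GENERAL_PURPOSE = "general-purpose"

      @dataclass
      class AISystemRecord:
          name: str
          owner: str
          purpose: str
          processes_personal_data: bool        # flags GDPR/DPIA relevance
          category: RiskCategory
          assessments: list = field(default_factory=list)  # e.g. DPIA, FRIA

      inventory = [
          AISystemRecord("CV screening assistant", "HR", "candidate ranking",
                         processes_personal_data=True,
                         category=RiskCategory.HIGH_RISK,
                         assessments=["DPIA", "FRIA"]),
      ]
      print([r.name for r in inventory
             if r.category is RiskCategory.HIGH_RISK])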

  • View profile for Deepak Pareek

    Forbes-featured Rainmaker, Influencer, Keynote Speaker, Investor, Mentor, Ecosystem Creator focused on AgTech, FoodTech, CleanTech. A Farmer, Technology Pioneer - World Economic Forum, and an Author.

    45,672 followers

    AgriStack: The Next Digital Revolution or Another Pipe Dream?

    The government's latest digital public infrastructure (DPI) project, AgriStack, aims to transform India's agriculture sector. In the Union Budget 2024-25, Finance Minister Nirmala Sitharaman announced the implementation of AgriStack, a digital public infrastructure for agriculture, over the next three years, integrating over 6 crore farmers into a formal land registry system. This follows the previous year's budget announcement of a Digital Public Infrastructure for Agriculture (DPIA), aimed at providing inclusive, farmer-centric solutions, enhancing access to farm inputs, credit, insurance, crop planning, and market intelligence, and supporting agri-tech growth.

    The AgriStack initiative began in 2021, when the Ministry of Agriculture & Farmers Welfare, Government of India, formed a task force to develop a digital public infrastructure framework. I also had the privilege of participating extensively in the deliberations. This led to the India Digital Ecosystem Architecture (IDEA) and the creation of the Unified Farmers Service platform. IDEA was an outcome of the foundation laid by the World Economic Forum's flagship program Artificial Intelligence for Agriculture Innovations (AI4AI).

    AgriStack seeks to revolutionize agriculture through advanced digital technologies, creating a unified platform that consolidates various agricultural data sets. By leveraging data analytics, artificial intelligence, and other digital tools, it aims to enhance productivity, ensure better market access, and promote sustainable practices. One key feature is the creation of a unique digital ID for each farmer, linking comprehensive data sets including land records, crop patterns, soil health, weather forecasts, and access to credit and insurance. This centralization aims to provide tailored advice and facilitate direct benefit transfers to farmers.

    AgriStack's development started in 2021, with pilots in various states refining the system. Integrating applications like the government's e-NAM, ITC Limited's eChoupal, and NCDEX's NeML, AgriStack promises comprehensive information on weather, supply chains, and warehousing. MoUs with companies like Microsoft, Cisco, Jio, Amazon, and Esri have further bolstered its development.

    AgriStack, part of the 2015 Digital India initiative, has been in discussion since 2020. Its success hinges on overcoming challenges like the digital divide, data standards, governance mechanisms, privacy concerns, and, last but not least, the availability of an adequate budget. Only time will tell whether AgriStack can realize its transformative potential in the agricultural sector.

  • View profile for Santiago Valdarrama

    Computer scientist and writer. I teach hard-core Machine Learning at ml.school.

    120,694 followers

    The first open-source implementation of the paper that will change automatic test generation is now available!

    In February, Meta published a paper introducing a tool to automatically increase test coverage, guaranteeing improvements over an existing code base. This is a big deal, but Meta didn't release the code.

    Fortunately, we now have Cover-Agent, an open-source tool you can install that implements Meta's paper to generate unit tests automatically: https://lnkd.in/eCitDjin

    I recorded a quick video showing Cover-Agent in action. There are two things I want to mention:

    1. Automatically generating unit tests is not new, but doing it right is difficult. If you ask ChatGPT to do it, you'll get duplicate, non-working, and meaningless tests that don't improve your code. Meta's solution only generates unique tests that run and increase code coverage.

    2. People who write tests before writing the code (TDD) will find this less helpful. That's okay. Not everyone does TDD, but we all need to improve test coverage.

    There are many good and bad applications of AI, but this is one I'm looking forward to making part of my life.
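
    [Editor's note] The key idea, as described in point 1, is a filter: a generated test is kept only if it runs and measurably improves coverage. A rough sketch of that loop, with a hypothetical build_and_run callback, might look like this:

      # Rough sketch of the keep-only-improving-tests loop. `build_and_run` is
      # a hypothetical callback returning (all_tests_passed, coverage_percent).

      def accept(candidate: str, suite: list, build_and_run) -> bool:
          _, baseline = build_and_run(suite)
          passed, coverage = build_and_run(suite + [candidate])
          # Discard tests that fail to run, and tests that add no coverage.
          return passed and coverage > baseline

      def improve_suite(candidates: list, suite: list, build_and_run) -> list:
          for candidate in candidates:     # e.g. LLM-generated test functions
              if accept(candidate, suite, build_and_run):
                  suite = suite + [candidate]
          return suite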

  • View profile for Eve Tamme
    Eve Tamme Eve Tamme is an Influencer

    Senior Advisor, Climate Policy │ Chair │ Board Member │ Carbon Markets │ Carbon Removal │ Carbon Capture •Personal views•

    29,536 followers

    This week, the International Energy Agency (IEA) launched a major report on #CCUS policies and business models. It's the most comprehensive piece I've seen so far, and I'm glad to have contributed as one of the reviewers. The report provides a detailed overview of what exists in the policy landscape and what is missing. I warmly recommend having a look.

    Some general messages:
    • CCUS is expected to contribute 8% of emission reductions by 2050, plus #carbonremoval from the application of CCUS technologies.
    • More than 400 projects have been announced across the value chain over the last three years, but deployment has remained relatively flat. The long lead times (median around six years) must be urgently reduced.
    • The current project pipeline would only deliver a third of what's needed globally by 2030. Policymakers need to create the conditions for the industry to make these projects happen.
    • New part-chain business models are emerging, where separate entities specialise in different parts of the CCUS value chain.
    • The oil and gas sector continues to play a role, and new specialised players are entering the market: chemical and engineering companies providing CO2 capture solutions and infrastructure, shipping companies expanding their portfolios, and new companies focusing exclusively on CCUS.
    • As a result, old and new players are now establishing joint ventures in a CCUS hub configuration.
    • New business models also create new project complexities. There is a greater need for coordination across the value chain, mitigation of counterparty risks, allocation of long-term liability, and management of shared, cross-border CO2 transport and storage infrastructure.
    • Governments can support the deployment of these new models and step in where challenges remain. This, of course, requires governments to better understand how the CCUS project development landscape is progressing.

    Last but not least, a visual that compares the cost of CCS with the EU carbon price. There's that evergreen question of what the carbon price should be to incentivise CCS. The right answer is that a strong carbon price is only one of many elements needed, and today it barely touches CCS applications with diluted CO2 streams, as seen below.

    Link to the report in the comments.
