content.dataiku.com
Submitted URL: https://pages.dataiku.com/e3t/Ctc/GA+113/cfvmy04/VW9rRb70GrRrW3T9nfv5_6yqNW1YzZHS5kjyFYN65wGTq5nR32W50kH_H6lZ3mDW6H775q8KQ...
Effective URL: https://content.dataiku.com/eu-ai-act-infographic-email?utm_campaign=GLO+CONTENT+DB+Emails+CTO%2FCOI+%26+Data+Execs+2024&utm...
Submission: On August 30 via manual from SA — Scanned from DE
The EU AI Act: What You Need to Know

The EU AI Act is principally concerned with protecting EU citizens through new mechanisms that impact the entire AI lifecycle:

Article 1: Subject Matter
The purpose of this regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, and fundamental rights enshrined in the Charter, including democracy, the rule of law, and environmental protection, against the harmful effects of AI systems in the Union, and supporting innovation.

1. RISK LEVELS FOR AI USE CASES

UNACCEPTABLE RISK: AI systems with untenable potential for harm, e.g., systems that predict criminal behavior or expand facial recognition databases via unauthorized methods. These cannot be deployed (Art. 5).

HIGH RISK: AI systems with serious potential for harm, e.g., biometric identification, emotion recognition systems, AI in critical infrastructure, or AI used in worker management. These must meet specific pre- and post-market requirements (Art. 8-15).

LIMITED RISK: AI systems with some potential for harm, e.g., those generating or manipulating audio, video, text, or image content, as well as chatbots. Providers must disclose AI-generated content and inform users that they are interacting with AI systems, with the option of voluntary governance (Art. 50).

MINIMAL RISK: AI systems with no potential for harm, e.g., spam filters and AI-enabled video games. No new obligations, with the option of voluntary governance (Art. 95).

2. NEW OBLIGATIONS FOR GENERAL-PURPOSE AI (GPAI) PROVIDERS, CONSEQUENTIAL FOR DOWNSTREAM USERS

The Act defines two tiers of general-purpose AI models. Independent of tier, all GPAI providers face new documentation duties. Providers of GPAI models with systemic risk come under tighter scrutiny, with responsibilities for proactive risk mitigation.

3. PENALTIES FOR NONCOMPLIANCE

Compared to the GDPR, where fines for certain infringements can reach up to 2% of total annual worldwide turnover or €10 million, penalties for infringing the EU AI Act can be up to 3.5x higher, potentially reaching 7% of global turnover or €35 million:

Fines under the EU AI Act:
- UNACCEPTABLE RISK: up to €35 million or 7% of total annual worldwide turnover for the preceding financial year
- HIGH RISK: up to €15 million or 3% of total annual worldwide turnover for the preceding financial year
- MINIMAL RISK: up to €15 million or 3% of total annual worldwide turnover for the preceding financial year

For comparison, the GDPR fines referenced above reach up to €10 million or 2% of total annual worldwide turnover for the preceding financial year, 3.5x lower than the EU AI Act maximum at both the fixed and percentage caps.

What's Next? Ask Yourself:
1. Do I know all AI systems currently operationalized and in development across my organization?
2. Am I confident about the associated risk levels of those systems?
3. Are there teams in my organization getting ready for the EU AI Act?

DOWNLOAD YOUR EU AI ACT COMPLIANCE CHECKLIST

Next: Key Pillars for Achieving EU AI Act Readiness
This session, led by Jacob Beswick, Dataiku's AI Governance Lead, focuses on the foundations for achieving EU AI Act readiness. Gain a deeper understanding of your next best actions toward EU AI Act readiness.
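The 3.5x comparison between the two regimes comes out of simple arithmetic: both the GDPR and the EU AI Act cap fines at a fixed amount or a percentage of total annual worldwide turnover, whichever is higher. A minimal sketch of that calculation, using the figures cited above; the function name and the €2B turnover figure are illustrative, not from the source, and this is not legal advice:

```python
# Illustrative only (not legal advice). Fine caps are "fixed amount or
# percentage of total annual worldwide turnover, whichever is higher".
# Integer euros and whole-number percentages keep the arithmetic exact.

def max_fine(turnover_eur: int, fixed_cap_eur: int, pct: int) -> int:
    """Return the applicable maximum fine: the higher of the two caps."""
    return max(fixed_cap_eur, turnover_eur * pct // 100)

# Hypothetical company with €2B annual worldwide turnover.
turnover = 2_000_000_000

# EU AI Act, unacceptable-risk (prohibited) practices: €35M or 7%.
ai_act_cap = max_fine(turnover, 35_000_000, 7)   # 140,000,000

# GDPR tier used in the infographic's comparison: €10M or 2%.
gdpr_cap = max_fine(turnover, 10_000_000, 2)     # 40,000,000

print(ai_act_cap / gdpr_cap)  # 3.5
```

For a large company the percentage cap dominates, and for a small one the fixed cap does, but because both figures scale by the same factor (35/10 = 7/2 = 3.5), the ratio between the two regimes' maximums is 3.5x either way.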