I. Almeida
Goodreads Author
Born
Setúbal, Portugal
Member Since
September 2023
“The lack of transparency regarding training data sources and the methods used can be problematic. For example, algorithmic filtering of training data can skew representations in subtle ways. Attempts to remove overt toxicity by keyword filtering can disproportionately exclude positive portrayals of marginalized groups. Responsible data curation requires first acknowledging and then addressing these complex tradeoffs through input from impacted communities.”
― Introduction to Large Language Models for Business Leaders: Responsible AI Strategy Beyond Fear and Hype
“For businesses, it is vital to embed ethical checkpoints in workflows, allowing models to be stopped if unacceptable risks emerge. The apparent ease of building capable LLMs with existing foundations can mask serious robustness gaps. However unrealistic the scenario may seem under pressure, responsible LLM work requires pragmatic commitments to stop if red lines are crossed during risk assessment.”
― Introduction to Large Language Models for Business Leaders: Responsible AI Strategy Beyond Fear and Hype
“Many presume that integrating more advanced automation will directly translate into productivity gains. But research reveals that lower-performing algorithms often elicit greater human effort and diligence. When automation makes obvious mistakes, people stay attentive to compensate. Yet flawless performance prompts blind reliance, causing costly disengagement. Workers overly dependent on accurate automation sleepwalk through responsibilities rather than apply their own judgment.”
― Introduction to Large Language Models for Business Leaders: Responsible AI Strategy Beyond Fear and Hype
Topics Mentioning This Author
topics | posts | views | last activity
---|---|---|---
Goodreads Librari...: Merging author pages | 2 | 5 | Jan 16, 2024 04:25AM
Goodreads Librari...: [DONE] Please combine editions | 2 | 2 | Aug 21, 2025 05:39AM
“As LLMs burgeon and permeate diverse sectors, the mandate for transparency, facilitated by all-encompassing documentation, becomes even more pressing.”
― Introduction to Large Language Models for Business Leaders: Responsible AI Strategy Beyond Fear and Hype
“LLMs represent some of the most promising yet ethically fraught technologies ever conceived. Their development plots a razor’s edge between utopian and dystopian potentials depending on our choices moving forward.”
― Introduction to Large Language Models for Business Leaders: Responsible AI Strategy Beyond Fear and Hype
“Automation promises to execute certain tasks with superhuman speed and precision. But its brittle limitations reveal themselves when the unexpected arises. Studies consistently show that, as overseers, humans make for fickle partners to algorithms. Charged with monitoring for rare failures, boredom and passivity render human supervision unreliable.”
― Introduction to Large Language Models for Business Leaders: Responsible AI Strategy Beyond Fear and Hype