000 01935nam a2200253 4500
005 20251118122902.0
008 251118s2020    nyu      b    001 0 eng d
020 _a9781786494337
_qpaperback
040 _beng
_erda
_cKPN
082 0 4 _a174.90063
_bCHR 2020
_223
100 1 _aChristian, Brian,
_d1984-
_eauthor.
245 1 4 _aThe alignment problem :
_bmachine learning and human values /
_cBrian Christian.
250 _aFirst edition.
264 1 _aNew York, NY :
_bW. W. Norton & Company,
_c2020.
300 _axii, 476 pages ;
_c25 cm
504 _aIncludes bibliographical references (pages [401]-451) and index.
520 _a"A jaw-dropping exploration of everything that goes wrong when we build AI systems--and the movement to fix them. Today's 'machine-learning' systems, trained by data, are so effective that we've invited them to see and hear for us--and to make decisions on our behalf. But alarm bells are ringing. Systems cull résumés until, years later, we discover that they have inherent gender biases. Algorithms decide bail and parole--and appear to assess black and white defendants differently. We can no longer assume that our mortgage application, or even our medical tests, will be seen by human eyes. And autonomous vehicles on our streets can injure or kill. When systems we attempt to teach will not, in the end, do what we want or what we expect, ethical and potentially existential risks emerge. Researchers call this the alignment problem. In best-selling author Brian Christian's riveting account, we meet the alignment problem's 'first-responders,' and learn their ambitious plan to solve it before our hands are completely off the wheel"--
_cProvided by publisher.
650 0 _aArtificial intelligence
_xMoral and ethical aspects.
650 0 _aArtificial intelligence
_xSocial aspects.
650 0 _aMachine learning
_xSafety measures.
650 0 _aSoftware failures.
650 0 _aSocial values.
942 _2ddc
_c1
_n0
999 _c1676
_d1676