The Fort Worth Press - Firms and researchers at odds over superhuman AI


Firms and researchers at odds over superhuman AI / Photo: © AFP/File

Hype is growing from leaders of major AI companies that "strong" computer intelligence will imminently outstrip humans, but many researchers in the field see the claims as marketing spin.

The belief that human-or-better intelligence -- often called "artificial general intelligence" (AGI) -- will emerge from current machine-learning techniques fuels hypotheses for the future ranging from machine-delivered hyperabundance to human extinction.

"Systems that start to point to AGI are coming into view," OpenAI chief Sam Altman wrote in a blog post last month. Anthropic's Dario Amodei has said the milestone "could come as early as 2026".

Such predictions help justify the hundreds of billions of dollars being poured into computing hardware and the energy supplies to run it.

Others, though, are more sceptical.

Meta's chief AI scientist Yann LeCun told AFP last month that "we are not going to get to human-level AI by just scaling up LLMs" -- the large language models behind current systems like ChatGPT or Claude.

LeCun's view appears backed by a majority of academics in the field.

Over three-quarters of respondents to a recent survey by the US-based Association for the Advancement of Artificial Intelligence (AAAI) agreed that "scaling up current approaches" was unlikely to produce AGI.

- 'Genie out of the bottle' -

Some academics believe that many of the companies' claims, which bosses have at times flanked with warnings about AGI's dangers for mankind, are a strategy to capture attention.

Businesses have "made these big investments, and they have to pay off," said Kristian Kersting, a leading researcher at the Technical University of Darmstadt in Germany and AAAI member.

"They just say, 'this is so dangerous that only I can operate it, in fact I myself am afraid but we've already let the genie out of the bottle, so I'm going to sacrifice myself on your behalf -- but then you're dependent on me'."

Scepticism among academic researchers is not total, with prominent figures like Nobel-winning physicist Geoffrey Hinton or 2018 Turing Award winner Yoshua Bengio warning about dangers from powerful AI.

"It's a bit like Goethe's 'The Sorcerer's Apprentice', you have something you suddenly can't control any more," Kersting said -- referring to a poem in which a would-be sorcerer loses control of a broom he has enchanted to do his chores.

A similar, more recent thought experiment is the "paperclip maximiser".

This imagined AI would pursue its goal of making paperclips so single-mindedly that it would turn Earth and ultimately all matter in the universe into paperclips or paperclip-making machines -- having first got rid of human beings that it judged might hinder its progress by switching it off.

While not "evil" as such, the maximiser would fall fatally short on what thinkers in the field call "alignment" of AI with human objectives and values.

Kersting said he "can understand" such fears -- while suggesting that "human intelligence, its diversity and quality is so outstanding that it will take a long time, if ever" for computers to match it.

He is far more concerned with near-term harms from already-existing AI, such as discrimination in cases where it interacts with humans.

- 'Biggest thing ever' -

The apparently stark gulf in outlook between academics and AI industry leaders may simply reflect people's attitudes as they pick a career path, suggested Sean O hEigeartaigh, director of the AI: Futures and Responsibility programme at Britain's Cambridge University.

"If you are very optimistic about how powerful the present techniques are, you're probably more likely to go and work at one of the companies that's putting a lot of resource into trying to make it happen," he said.

Even if Altman and Amodei may be "quite optimistic" about rapid timescales and AGI emerges much later, "we should be thinking about this and taking it seriously, because it would be the biggest thing that would ever happen," O hEigeartaigh added.

"If it were anything else... a chance that aliens would arrive by 2030 or that there'd be another giant pandemic or something, we'd put some time into planning for it".

The challenge can lie in communicating these ideas to politicians and the public.

Talk of super-AI "does instantly create this sort of immune reaction... it sounds like science fiction," O hEigeartaigh said.

P.McDonald--TFWP