AI poses no existential threat to humanity

  • > This means they remain inherently controllable, predictable and safe.

    I have no major problem with the overall tone of the article, which says "AGI is not going to happen from this" and sheets the blame, such as it is, home to the intent of the owner/user. That keeps the human in the loop.

    But they do not remain "predictable and safe" if they continue to repeat bad input, or are used antisocially to justify shit policy outcomes.

  • The real threat is people expecting real intelligence to somehow "emerge" from statistically generated results.