I think that along with "astonishing" and "important step", it would be useful if The Sequence also highlighted the complexity and dangers of these models. It is not always about the AI hype and the market. For example, the report from the Meta research team (https://arxiv.org/abs/2205.01068) states the following: "OPT-175B has a high propensity to generate toxic language and reinforce harmful stereotypes". Who is highlighting these things? Regardless, yes, this is an important development.