The program director at my workplace has just discovered AI and is very excited about it. He wants me to lead a team seeing how it could be used for our organization (engineering firm). I told him that there are definitely advantages that we could gain, but one of the biggest risks is if you offload the majority of writing tasks to AI over many years, then you're going to end up with a generation (and workforce) that doesn't know how to communicate via writing. And I personally feel that on aggregate, most people already aren't great at writing.
Edit: Case in point, I made a typo in the original post. Fixed!
I read something in a legal newsletter a while back warning readers to verify things for themselves when using chatbots for legal research; apparently, there have been cases of chatbots producing plausible-looking examples of case law that turned out to be cases that had never actually existed. (Rather embarrassing, to say the least, to cite fictitious case law in Court before an alert Judge.)
Computerised machines made to be obliging cannot reason and have no contact with, or conception of, reality; they 'see' pixels, not real life. The last thing we should be doing is handing responsibility over to systems incapable of understanding.