I think it’s also worth noting that in addition to the “judgment” part of the split, the LLM is limited in how it can explain “why” it did something. Example: if you gave an LLM a scenario where it can invest a large sum of money, it can tell you what it thinks you should do with it. But when you ask it why that investment thesis is valid and what data it used to reach it, it becomes clear that its version of “why” is different (often vastly) from what we would require of a human. Data collection and analysis is, as you eloquently describe, the same. As the part-time data clown in my own group, I use LLMs to process data. GPT (our internal version of it) will often tell me, “I found a correlation between X and Y; they must be related to Z and W.” When I ask it “why,” the wheels tend to fall off. It turns out that unrelated data gets grouped together in ways that are meaningless (did you know that a company’s revenue is related to its ultimate valuation?!) or outright incorrect. Being able to sniff-test the conclusions of data analysis looks like a human’s job for now.
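To make the revenue/valuation point concrete, here is a minimal sketch (hypothetical numbers, not anyone’s internal data) of the kind of “finding” that sounds impressive but carries no “why”: a near-perfect correlation between two quantities where one is essentially derived from the other.

import numpy as np

# Hypothetical data: valuation is just a revenue multiple plus noise,
# so a strong correlation is guaranteed and tells us nothing new.
rng = np.random.default_rng(0)
revenue = rng.uniform(1e6, 1e9, size=500)           # annual revenue, dollars
valuation = revenue * 8 + rng.normal(0, 1e8, 500)   # ~8x multiple + noise

r = np.corrcoef(revenue, valuation)[0, 1]
print(f"correlation(revenue, valuation) = {r:.3f}")  # ~0.999: "related", but vacuous

A tool that reports the 0.999 and stops has done arithmetic, not analysis; deciding whether the correlation means anything is the human part.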
Eric, I like how inviting you are about learning this transition together. It’s much less threatening than saying “you will all be left behind if you don’t get on the bandwagon now!” and I wish the tech industry had taken this approach first. It makes me want to get involved more than it makes me want to take sides :)
Great post. Agree that the folks who will thrive are the ones who leverage AI to automate their workflows and focus on higher-order thinking for value creation.
Is it? Or is this more hype BS for a giant scam? When AI users pay the true cost, we can talk again - until then, to me, it’s nothing but a scam. LLMs are not intelligent. They’re guessing machines that are wrong often enough that everything has to be checked.