Badly Trained AI

  • Jeff Moden - Thursday, January 10, 2019 9:31 AM

    xsevensinzx - Thursday, January 10, 2019 8:35 AM

    Jeff Moden - Thursday, January 10, 2019 8:19 AM

    Heh... AI... machine learning... When it comes to advertising on the net, someone is hitting the crack pipe pretty hard.  I leased a new truck a year ago and immediately got slammed with advertising and emails trying to entice me to... buy a new truck. I'm still getting that kind of advertising, although at a slightly lower volume.  It's just a stupid and incredible waste of advertising dollars.  I know how that advertising is paid for because I used to work for a company that did "Double-click.net" processing for "spotlight pixels" in every graphic that showed up.  It's a huge, stupid waste of money the way I've been hit.  No wonder stuff costs so much nowadays.

    I work in advertising myself. Got to remember that, out of all of the people in the world or all of the people in your country, you are marked as someone who MAY have an interest in buying a car, compared to the hundreds of millions or billions around the world about whom advertisers have no idea whether they can even drive. So, while you may think it's a waste, it's still a pretty big lead on a potential sale for someone from a statistical standpoint.

    But you know, the leads can go cold! I guess that's what separates the good advertisers from the bad ones.

    My point is that the AI/machine learning cited me as a potential buyer AFTER I "bought" a vehicle, without understanding that I just "bought" a vehicle and I'm not going to be in the market for at least another 36 months, when the lease runs out.  That means they failed miserably in identifying the nature of the source of the data they're using.  It was never a "lead" because it arrived already frozen.  It's just stupid.

    Sometimes useful for some, though. After I've purchased something, I still like to check the prices just to be sure I wasn't ripped off.
    I suppose there is no way the computer could know whether you are still looking or have already made the purchase.

  • xsevensinzx - Thursday, January 10, 2019 8:35 AM

    I work in advertising myself. Got to remember that, out of all of the people in the world or all of the people in your country, you are marked as someone who MAY have an interest in buying a car, compared to the hundreds of millions or billions around the world about whom advertisers have no idea whether they can even drive. So, while you may think it's a waste, it's still a pretty big lead on a potential sale for someone from a statistical standpoint.

    But you know, the leads can go cold! I guess that's what separates the good advertisers from the bad ones.

    I used to work in advertising, specialising in direct mail.  We'd choose the top 5% to 10% of a mailing list based on a propensity-to-buy score, get a phenomenal uplift in response rate (as you'd expect), and then have the client insist on mailing the rest of the list because our selection had worked so well.

    We experimented with true personalisation on a customer base of 5 million but found that behaviour could be predicted by categorising those customers into 45 clusters with no real benefit in going below that level of granularity.  Again, people were strangely resistant to solutions that were simple and cheap.
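
    Purely as an illustration of that kind of segmentation: a minimal sketch, assuming a customer table with a few numeric behaviour columns (the file name and feature columns are invented; only the 45-cluster figure comes from the post above):

        # Sketch only: cluster a customer base into a fixed number of behavioural
        # segments. The input file and feature columns are hypothetical.
        import pandas as pd
        from sklearn.preprocessing import StandardScaler
        from sklearn.cluster import KMeans

        customers = pd.read_csv("customers.csv")                    # hypothetical input
        features = customers[["recency_days", "frequency", "monetary_value"]]

        scaled = StandardScaler().fit_transform(features)           # put features on comparable scales
        customers["segment"] = KMeans(n_clusters=45, random_state=0).fit_predict(scaled)

        # Campaigns are then targeted per segment rather than per individual.
        print(customers.groupby("segment").size())

    The design point is the same one the post makes: once behaviour is captured at the segment level, per-customer "true personalisation" adds cost without adding much predictive value.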

  • Jonathan AC Roberts - Thursday, January 10, 2019 7:55 AM

    One thing I don't understand about biased AI systems is how an AI system can be biased if you don't give it the information that could make it biased. For example, if you don't give the algorithm the sex and race of the person, how could it come to a biased conclusion about the person?

    The AI finds patterns that you don't realize. For example, you don't say "rate a resume with 'woman' in it lower." What you do is say these resumes are 1 star, these are 2 stars, etc. The system isn't programmed, but starts to sift through resumes, trying to find a pattern that says why something is 1 star, 2 stars, 4 stars, etc. Eventually it notices that all the resumes that are very similar in verbiage, but have a different rating, have "woman" in them.

    A case where the prejudice baked into the tagged data becomes apparent when a system learns from what humans have rated.
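
    As a rough illustration of that mechanism, a bag-of-words model trained only on human star ratings will happily learn a negative weight for a token like "women" if the ratings it was given correlate with it. The toy data below is invented purely to show how that happens; it is not Amazon's system:

        # Toy example: a linear model trained on human ratings learns to penalise
        # the token "women" because the labels correlate with it, even though no
        # one ever wrote a rule saying so. Data is invented for illustration.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression

        resumes = [
            "captain of men's chess club, python, sql",
            "captain of women's chess club, python, sql",
            "men's debate team lead, java, statistics",
            "women's debate team lead, java, statistics",
        ]
        ratings = [1, 0, 1, 0]   # 1 = rated highly by past reviewers, 0 = rated low

        vec = CountVectorizer()
        X = vec.fit_transform(resumes)
        model = LogisticRegression().fit(X, ratings)

        # The most negatively weighted tokens reveal what the model actually learned.
        weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
        print(sorted(weights.items(), key=lambda kv: kv[1])[:3])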

  • Jeff Moden - Thursday, January 10, 2019 8:19 AM

    Heh... AI... machine learning... When it comes to advertising on the net, someone is hitting the crack pipe pretty hard.  I leased a new truck a year ago and immediately got slammed with advertising and emails trying to entice me to... buy a new truck. I'm still getting that kind of advertising, although at a slightly lower volume.  It's just a stupid and incredible waste of advertising dollars.  I know how that advertising is paid for because I used to work for a company that did "Double-click.net" processing for "spotlight pixels" in every graphic that showed up.  It's a huge, stupid waste of money the way I've been hit.  No wonder stuff costs so much nowadays.

    The same happened to me, but it's completely reasonable to expect this in my case: I was most certainly tracked by searches pertaining to vehicles as I was shopping around. The actual purchase, I would imagine, should NOT show up in the same datasets, as it should instead be limited to the companies that were actually involved in the purchase. Given that the purchase of a new vehicle is significant enough that folks might spend a period of time evaluating vehicles, it makes sense that ad folks who see these analytics would roll the dice and hope to hit me up, since it's statistically likely that I have not immediately purchased a vehicle within such a narrow timeframe relative to initiating my web searches.

  • Steve Jones - SSC Editor - Thursday, January 10, 2019 10:00 AM

    Jonathan AC Roberts - Thursday, January 10, 2019 7:55 AM

    One thing I don't understand about biased AI systems is how an AI system can be biased if you don't give it the information that could make it biased. For example, if you don't give the algorithm the sex and race of the person, how could it come to a biased conclusion about the person?

    The AI finds patterns that you don't realize. For example, you don't say "rate a resume with 'woman' in it lower." What you do is say these resumes are 1 star, these are 2 stars, etc. The system isn't programmed, but starts to sift through resumes, trying to find a pattern that says why something is 1 star, 2 stars, 4 stars, etc. Eventually it notices that all the resumes that are very similar in verbiage, but have a different rating, have "woman" in them.

    A case where the prejudice baked into the tagged data becomes apparent when a system learns from what humans have rated.

    There was an article about how, if someone entered an Asian name (Muhammad Khan) for an insurance quote, it came to a higher amount than if they entered a British name. My thought on this was: just don't give the AI algorithm the person's name, then it can't make a decision based on the name (a rough sketch of that idea, and its limitation, follows the links below).
    https://talentorganizationblog.accenture.com/financialservices/are-insurers-raising-their-ai-right-can-human-machine-collaboration-mitigate-risks
    https://www.bbc.co.uk/news/uk-wales-42795981
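
    Reading that suggestion literally, it amounts to dropping the name column before training. A minimal sketch with invented data (the names, postcodes, and premiums are all hypothetical) also shows the catch the earlier posts hint at: a correlated field such as postcode can keep acting as a proxy even when the name is gone:

        # Sketch of "just don't give the AI the name". All data is invented.
        import pandas as pd

        quotes = pd.DataFrame({
            "name":     ["Muhammad Khan", "John Smith", "Amina Begum", "David Jones"],
            "postcode": ["B8", "SY23", "B8", "SY23"],
            "premium":  [1250, 900, 1230, 910],
        })

        training_data = quotes.drop(columns=["name"])   # name withheld from the model
        # The proxy signal survives: premiums still split cleanly by postcode.
        print(training_data.groupby("postcode")["premium"].mean())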

  • Jonathan AC Roberts - Thursday, January 10, 2019 9:35 AM

    Jeff Moden - Thursday, January 10, 2019 9:31 AM

    My point is that the AI/machine learning cited me as a potential buyer AFTER I "bought" a vehicle, without understanding that I just "bought" a vehicle and I'm not going to be in the market for at least another 36 months, when the lease runs out.  That means they failed miserably in identifying the nature of the source of the data they're using.  It was never a "lead" because it arrived already frozen.  It's just stupid.

    Sometimes useful for some, though. After I've purchased something, I still like to check the prices just to be sure I wasn't ripped off.
    I suppose there is no way the computer could know whether you are still looking or have already made the purchase.

    Ok, add to this 'feature' of not having a complete understanding of the data's history and implications the idea that there are bound to be errors in the accumulated data that, by their very nature, will never be fixed, but will likely continue to be used as the basis for 'intelligence'.  The whole concept of AI may have more risk than can be justified.  Bad information is worse than no information at all, but we all know how hard it is to get rid of bad data.

    Rick

    One of the best days of my IT career was the day I told my boss if the problem was so simple he should go fix it himself.

  • Steve Jones - SSC Editor - Thursday, January 10, 2019 10:00 AM

    A case where the prejudice baked into the tagged data becomes apparent when a system learns from what humans have rated.

    One case that hits all of us is in insurance risk calculations.  Even though you have not had a claim in years or decades, your premium is going to be based on claim history in your area, your age group, etc.  I relocated from a rural area to a metropolis, and my premium increased five times over the first six renewals, even though there have still been no claims, because I am now perceived as being a greater risk due to location instead of events.  The bias is evident in the selection of criteria on which decisions are made.

    Obviously this can also be a good thing, for instance, when your doctor asks you about any family history of heart attack, stroke, or cancer. But do you want your health care decisions based on how well your distant cousins pay their medical bills?

    Rick

    One of the best days of my IT career was the day I told my boss if the problem was so simple he should go fix it himself.

  • To me it is sad that they scrapped the project when it showed the inherent bias against women. Instead, they should have dealt with the bias that excludes women and probably has nothing to do with who is a good candidate anyway.

    A prime example of such a bias factor is employment gaps. This will affect women and older people more. The reasons are different, but the effect is the same: women often take breaks for children, and older workers find replacing a lost job harder. And now we have swept the problem under the carpet. Perhaps the idea that employment gaps should be considered is false. I can say that the only explanation for considering them has been rooted in the idea that since others didn't hire them, something must be wrong with them. And that simply doesn't follow. (Or perhaps it does, if you are bigoted, since the reasoning follows the bigotry.)

  • I think people lose sight of the fact that AI/ML are just a form of power tool. You wouldn't damn an electric drill for drilling through a pipe.
    One evening in A&E will reveal the folly of mankind when supplied with tools that they don't know how to use and for which they won't read the instructions.

  • Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day


    https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

    AI chatbots are simply repeating phrases verbatim, triggered by the presence of a keyword linked by proximity with the original canned response. We can't transpose attributes like "racist" or "intelligent" onto an AI any more than we could upon a parrot.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • skeleton567 - Thursday, January 10, 2019 11:00 AM

    One case that hits all of us is in insurance risk calculations.  Even though you have not had a claim in years or decades, your premium is going to be based on claim history in your area, your age group, etc.  I relocated from a rural area to a metropolis, and my premium increased five times over the first six renewals, even though there have still been no claims, because I am now perceived as being a greater risk due to location instead of events.  The bias is evident in the selection of criteria on which decisions are made.

    Obviously this can also be a good thing, for instance, when your doctor asks you about any family history of heart attack, stroke, or cancer. But do you want your health care decisions based on how well your distant cousins pay their medical bills?

    The chances are significantly greater in a metropolitan area that another driver will rear end your car and then either drive away or file a bogus lawsuit, leaving YOUR insurance company to pay the bill for an accident that was never your fault in the first place. You've simply moved into a more expensive risk group in general.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • "Gender bias was not the only issue. Problems with the data that underpinned the models’ judgments meant that unqualified candidates were often recommended for all manner of jobs, the people said. With the technology returning results almost at random, Amazon shut down the project, they said."

    The tech was returning "random results" but they shut it down because of "gender bias" .... 

    😉

  • Eric M Russell - Thursday, January 10, 2019 12:53 PM

    Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day


    https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

    AI chatbots are simply repeating phrases verbatim, triggered by the presence of a keyword linked by proximity with the original canned response. We can't transpose attributes like "racist" or "intelligent" onto an AI any more than we could upon a parrot.

    You beat me to it.  It's unfortunately similar to my reaction to the Amazon article.  The ML is intended to find patterns in the data: you usually have to spend time grooming the patterns (which includes deciding which correlations should be used in predictions).  Unfortunately, if you feed in data that shows more men getting job offers than women (never mind that more men applied than women), the ML is going to find that pattern and stupidly reinforce it.

    Unless you spend time ensuring that the data you use to train is balanced (or perhaps, will train the ML or AI engine in the direction you WANT it to go), I am not sure you can "blame" the program for finding spurious correlations that should not be used predictively. This also leads to some interesting ethical concerns: rebalancing the data (i.e. manipulating a "real life" data set to encourage certain behavior) MIGHT be benign, but can quickly start looking like disinformation, misleading the public, or other similarly ugly phrases in common culture if you are not careful.  It also shows how easy it would be to manipulate.
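
    "Rebalancing" in that sense is often nothing more exotic than resampling or reweighting the training set before fitting. A minimal sketch, assuming a labelled applicant table (all column names and numbers invented):

        # Sketch of rebalancing a training set by upsampling the minority group so
        # both groups contribute equally to training. Data is invented.
        import pandas as pd

        applicants = pd.DataFrame({
            "gender": ["m"] * 8 + ["f"] * 2,
            "hired":  [1, 1, 1, 0, 1, 0, 1, 1, 1, 0],
        })

        target = applicants["gender"].value_counts().max()
        balanced = pd.concat([
            grp.sample(n=target, replace=True, random_state=0)
            for _, grp in applicants.groupby("gender")
        ]).reset_index(drop=True)

        print(balanced["gender"].value_counts())   # now equal counts per group
        # Whether this correction is benign or manipulative is exactly the ethical
        # question above: the balanced set no longer reflects what actually happened.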

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • Personally, I wouldn't work for an organization that pre-screens candidates based on Credit / BMI / Social Media score or an AI. That doesn't serve the interests of qualified candidates or qualified hiring managers.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell - Thursday, January 10, 2019 2:16 PM

    Personally, I wouldn't work for an organization that pre-screens candidates based on Credit / BMI / Social Media score or an AI. That doesn't serve the interests of qualified candidates or qualified hiring managers.

    I agree with you. However, playing devil's advocate for a moment, I can understand why large companies do use some sort of AI. If you're getting thousands of resumes (CVs) a day, hiring the personnel staff to go through them by hand becomes very daunting, if not impossible. You've got to do something to try to cut down the mountain of resumes to be reviewed.

    Kindest Regards, Rod Connect with me on LinkedIn.
