Don't believe ChatGPT – we do not offer a "phone lookup" service

  • > All suggestions are welcome.

    Monetize it!

    Evil answer: Partner with an advertiser and sell https://api.opencagedata.com/geocode/v1/json as an ad space. This may be the first opportunity for an application/json-encoded advertisement.

    Nice answer: Partner with an actual phone lookup platform and respond with a 301 Moved Permanently at the endpoint.
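
    If you went the nice-answer route, it's nearly a one-liner in most stacks. A minimal sketch with Python's standard library (the partner URL is obviously a placeholder):

        from http.server import BaseHTTPRequestHandler, HTTPServer

        class PhoneLookupRedirect(BaseHTTPRequestHandler):
            def do_GET(self):
                # Permanently redirect the endpoint ChatGPT keeps inventing.
                self.send_response(301)
                self.send_header("Location", "https://phone-lookup-partner.example/")
                self.end_headers()

        HTTPServer(("", 8080), PhoneLookupRedirect).serve_forever()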

  • ChatGPT very convincingly recommends us for a service we don't provide.

    Dozens of people are signing up to our site every day, then getting frustrated when "it doesn't work".

    Please do NOT trust the nonsense ChatGPT spits out.

  • This is the biggest problem I encounter when trying to use ChatGPT on a daily basis for computer programming tasks. It "hallucinates" plausible-looking code that never existed or would never work, especially confusing what's in one module or API for something in another. This is where ChatGPT breaks when pushed a bit further than "make customized StackOverflow snippets."

    For example, I asked ChatGPT to show me how to use an AWS SDK "waiter" to wait on a notification on an SNS topic. It showed me code that looked right, but it was confusing functions in the SQS library for ones that would supposedly do the same with SNS (and SNS doesn't support what I wanted anyway); the pattern that does work is sketched below.
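
    SNS doesn't offer waiters; the usual working pattern is to subscribe an SQS queue to the topic and long-poll that queue. A minimal boto3 sketch, with the queue URL standing in for a queue already subscribed to the topic:

        import boto3

        sqs = boto3.client("sqs")
        # Placeholder: an SQS queue already subscribed to the SNS topic.
        QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-topic-queue"

        while True:
            # Long polling: block for up to 20 seconds waiting for a message.
            resp = sqs.receive_message(
                QueueUrl=QUEUE_URL,
                MaxNumberOfMessages=1,
                WaitTimeSeconds=20,
            )
            for msg in resp.get("Messages", []):
                print("Got notification:", msg["Body"])
                sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])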

  • I'm curious -- does anyone know of ML directions that could add any kind of factual confidence level to ChatGPT and similar?

    We all know now that ChatGPT is just autocomplete on steroids. It produces plausibly convincing patterns of speech.

    But from the way it's built and trained, it's not like there's even any kind of factual confidence level you could threshold, or anything. The concept of factuality doesn't exist in the model at all.

    So, is any progress being made towards internet-scale ML "fact engines" that also have the flexibility and linguistic expressiveness of ChatGPT? Or are these just two totally different paths that nobody knows how to marry?

    Because I know there's plenty of work done with knowledge graphs et al., but those are very brittle things that generally need plenty of human curation and verification, and can't provide any of the (good) "fuzzy thinking" that ChatGPT can. They can't summarize essays or write poems.
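
    (The closest existing signal I know of is the model's own token log-probabilities, which you could threshold -- but that measures how typical the text is, not whether it's true, which is exactly the problem. A toy sketch with made-up numbers:)

        import math

        # Hypothetical per-token log-probs for one generated answer.
        token_logprobs = [-0.05, -0.21, -3.9, -0.8, -0.1]

        # Geometric-mean token probability as a crude "confidence" score.
        avg_logprob = sum(token_logprobs) / len(token_logprobs)
        confidence = math.exp(avg_logprob)

        print(f"pseudo-confidence: {confidence:.2f}")
        if confidence < 0.5:
            # Flags fluent-but-unlikely spans; says nothing about factuality.
            print("low model confidence -- treat the answer as suspect")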

  • This marks the new age of "AI Optimization", where companies will strive to get their business featured in ChatGPT's answers.

    The OP's example is unwanted demand, but it clearly shows that ChatGPT can funnel potential customers toward a product or service.

  • That's quite the predicament. I hope OpenAI will listen, both to this and to anyone else in a similar situation. I'm reminded of the cases of ChatGPT recommending random people's personal phone numbers for various services.

    But yeah, don't trust ChatGPT for anything. Just earlier today I tried my darnedest to convince it that 2 pounds of feathers don't weigh the same as 1 pound of bricks, and it just would not listen, presumably because it was regurgitating stuff related to the common "1 pound of feathers and 1 pound of bricks" question.

    By the way, the last paragraph has some typos:

    > I wrote this post to have a place to send our new ChatGPT users when they ask why it isn’t work, but hopefully also it serves as a warning to othrs - you absolutely can not trust the output of ChatGPT to be truthful,

  • Because ChatGPT is so new, we are in this weird period where people haven't learned that it is just as incorrect as the rest of us.

    I am hoping that in a year from now people will be more skeptical of what they hear from conversational AI. But perhaps that is optimistic of me.

  • But guys we totally need to delete all of our search indexes and replace them with this instead

  • ChatGPT gets the rules to the pokemon trading card game wrong. It will tell you you can use 4 energy a turn. Convincingly. Not sure how it hallucinates this. The rule is 1 per turn.

  • I tried to ask ChatGPT about implementing an SSH SFTP subsystem with github.com/gliderlabs/ssh, and in every single answer it made up some non-existent API. I could not find those functions anywhere near the codebase nor on the internet, so I don't even understand how a "probabilistic model" can suggest something that has zero chance of appearing anywhere.

  • I don't normally go to lawyers, but I am wondering if this is doing material harm to your brand value, which is a declared asset of the company. I think it's arguable that ChatGPT has caused you financial risk.

    It's unconscionable. If there were no robot in the loop here, and it was people mis-transcribing YouTube videos to game e.g. Google search optimisation, we'd call it fraud.

  • ChatGPT is hilariously buggy - I asked “it” how to use an open source library I made. The output was wrong, ranging from a broken GitHub URL to outright broken or nonexistent code. I suspect it may even have used private code from other libs - I couldn't find some of the output it generated anywhere public.

  • Including the word 'phone' six times in a popular blog post is not going to help their predicament.

  • ChatGPT does not know how to be correct, it only knows how to sound correct.

    A better name for now would be PlausibleGPT.

  • ChatGPT doesn't "recommend" anything. It just recombines text based on statistical inferences that appear like a recommendation.

    It could just as well state that humans have 3 legs depending on its training set and/or time of day. In fact it has said similar BS.

  • Well for a start you could make it more obvious what your service does do. I don't know what "geocoding" is. Converting things to/from "text" is meaningless. You have to get all the way down ... way down, past authentication to the details of the `q` query parameter before it actually tells you.

    At the top you should have a diagram like this:

    Lat, lon <- opencage -> address

    With a few examples underneath.
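
    For example, something like this pair, sketched against the documented `q` and `key` parameters (untested, and YOUR_KEY is a placeholder):

        import requests

        API = "https://api.opencagedata.com/geocode/v1/json"

        # Forward geocoding: place name in, coordinates out.
        r = requests.get(API, params={"q": "Brandenburg Gate, Berlin", "key": "YOUR_KEY"})
        print(r.json()["results"][0]["geometry"])   # {'lat': 52.51..., 'lng': 13.37...}

        # Reverse geocoding: coordinates in, address out.
        # Note: a phone number is not a valid q -- that's the whole article.
        r = requests.get(API, params={"q": "52.5163, 13.3777", "key": "YOUR_KEY"})
        print(r.json()["results"][0]["formatted"])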

  • You could probably set up a rudimentary version of the service this influx of users is looking for in the time it took to write this article. Just grab the lat/long of each area code in the US off of Wikipedia, and there you go: at least it's something. No, it's not current position or anything like that, but IP geolocation is just as imperfect when it's not based on triangulation. Case in point: Google has plenty of IPs that geolocate to Mountain View but point to machines that are in Asia.
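
    A toy sketch of that idea (illustrative coordinates; a real table would be scraped from the Wikipedia list of NANP area codes):

        import re

        AREA_CODE_COORDS = {
            "212": (40.71, -74.01),   # New York, NY
            "312": (41.88, -87.63),   # Chicago, IL
            "415": (37.77, -122.42),  # San Francisco, CA
        }

        def rough_locate(phone: str):
            digits = re.sub(r"\D", "", phone)
            if len(digits) == 11 and digits.startswith("1"):
                digits = digits[1:]  # drop the NANP country code
            return AREA_CODE_COORDS.get(digits[:3])

        print(rough_locate("+1 (415) 555-0123"))  # (37.77, -122.42)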

  • Related: one reason I just started using Rainforest API is because GitHub Copilot recommended it.

    But also, last night I tried for 30 minutes to get it to write me some fairly simple HTML-parsing code. The tricky part was that I couldn't use DOMParser, since it was running on Cloudflare Workers, and it could never produce a working implementation using HTMLRewriter or regex, no matter how many examples I gave it.

  • I'm an attorney. I've typed legal questions into ChatGPT and it has spit out answers that are grievously, 100%, libelously wrong. It has named individuals and said they committed crimes, when it is unquestionable they did no such thing.

    I'm waiting for people to start calling me to ask questions about something ChatGPT said, and I'll tell them it's wrong. Then they'll start arguing with me and saying if ChatGPT said it, it must be right, and I must be wrong. And then I'll need to waste time proving that this idiotic chat bot that is spewing out garbage is, in fact, spewing out garbage.

  • The biggest takeaway for me was that it was getting info from YouTube videos. Is it actually watching and learning from the videos, or were links to GitHub just included in the comments?

  • > All suggestions are welcome.

    They have to get an API key from you. What about a large warning at the start of that process telling them that this isn't a service you provide?

  • Just redirect here: http://bigballi.com/Phone-Number-Lookup

  • If you have to tell potential customers you don’t do something, maybe you should just do it instead.

    ChatGPT as business line lead generator—is there anything it can’t do?

  • I remember a time when "I saw it on the internet" was a punchline for a joke about someone who's gullible or misinformed.

  • Fast, creative, and wrong isn't a winning trio. This is more evidence of ChatGPT being evolutionary, not revolutionary.

  • As a data scientist who has created AI applications and built many models over the last 10 years, I can say: beware of ChatGPT. AI-derived knowledge should be used only by those who understand its limits.

    One of the simplest AIs is a recommender. We put guardrails on using its predictions inside ecommerce apps by limiting what it learns from (purchases, for instance) and limiting what it is used to predict (purchases); a toy sketch of this kind of guardrailed recommender follows below. When Facebook uses a recommender, it learns from time-on-site (a value to FB but not necessarily to the user, and a complex behavior that can be composed of many non-beneficial sub-behaviors) and uses it to recommend things that lead to more time-on-site. That application is dangerously devoid of guardrails, as so much recent evidence has shown.

    Now we have a text-generating AI that has been trained on a great swath of human knowledge. That means the teachings of Gandhi as well as Hitler, etc. What do you expect it to "know" as truth? Generative AI that is used to generate thoughts from this training corpus MUST hold contradictory and downright evil ideas, since it has no way to judge between the ideas it learns from.

    Generative AI in this form can be nothing but psychopathic until guardrails can be devised to limit its psychopathic responses, OR the corpus it learns from can be labeled in a way that flags what is "bad", if we can even agree on what that means.

    Psychopaths can be useful if they are knowledgeable, but beware: you are talking to a psychopath in ChatGPT.
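
    The guardrailed recommender mentioned above, as a toy sketch: it learns only from purchases and predicts only purchases (the data is illustrative):

        from collections import Counter, defaultdict

        # Training signal is deliberately limited to purchase baskets.
        purchases = [
            ["milk", "bread", "eggs"],
            ["milk", "bread"],
            ["bread", "butter"],
        ]

        # Count how often each pair of items is bought together.
        co_counts = defaultdict(Counter)
        for basket in purchases:
            for item in basket:
                for other in basket:
                    if other != item:
                        co_counts[item][other] += 1

        def recommend(item, k=2):
            # Prediction is likewise limited to purchases.
            return [other for other, _ in co_counts[item].most_common(k)]

        print(recommend("milk"))  # ['bread', 'eggs']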

  • Seeing the amount of effort people put into hacking/optimizing PageRank SEO, we will see lots of prompt manipulation by all businesses if ChatGPT becomes the de facto search. Preventing system gaming is going to be 1000X more difficult for LLMs, which are kind of a black box.

  • > This is not a service we provide. It is not a service we have ever provided, nor a service we have any plans to provide. Indeed, it is not a service we are technically capable of providing.

    I'm curious: why not? It seems like a lot of people would be interested in this if you could figure out how to provide it.

  • Soon we are going to have an AIrobots.txt.

  • Is this not defamation, at least in some jurisdictions?

  • lol it recommended their API and gave Python code for using it

    but the real API doesn't give the results that the user asked ChatGPT for

    that is amusingly alarming

  • If this business suffers financial or reputational damage because of ChatGPT's misinformation, should OpenAI be liable?

  • It's not like ChatGPT made this up. There were pre-existing YouTube tutorials and Python scripts available that used OpenCage and purported to do this. OpenCage even blogged about this problem almost a year ago [1].

    Honestly, it looks more like OpenCage is trying to rehash the same issue for more clicks by spinning it off the hugely popular ChatGPT keywords. I wouldn't be too surprised if they created the original Python utilities themselves just to get some publicity by denouncing them.

    1. https://blog.opencagedata.com/post/we-can-not-convert-a-phon...