Brand Engagement Network, Inc. (BNAI)

H.C. Wainwright 26th Annual Global Investment Conference 2024

Sep 9, 2024

Anthony Stevens
Analyst, H.C. Wainwright

Morning, everyone. Thank you for joining us here at the H.C. Wainwright 26th Annual Global Investment Conference. My name is Anthony Stevens. I'm an analyst on the investment banking team here at HCW, and with that said, I'd like to announce our next featured speaker. We have Paul Chang, who is with the Brand Engagement Network. Paul, take it away.

Paul Chang
CEO, Brand Engagement Network

Yep, thanks very much. It's an intimate group here. How many of you have heard of BEN, or Brand Engagement Network? Okay, all right, so I'll give you a little background. So BEN and the kite, really, the name came from Ben Franklin. He was sort of the inspiration because, as the first Postmaster General, he figured out how to keep everyone in the U.S. connected. So that was kind of our idea around this company: being able to connect not just consumers, obviously, social media companies do that, but for brands to engage the consumers. So that's where the name Brand Engagement Network came from.

But we are essentially a safe and intelligent AI company, and we've been saying this for, you know, a couple of years now, since we've gone public here. But it seems to now be resonating with other startups that are looking to figure out how to make AI sort of safer and maybe have more control over it. So I'll give you a little bit of an overview, and then I'm gonna share an analogy, and then hopefully things will sink in a little better. So quickly, we're really a conversational AI company. So we started right from the beginning with the ability to have interactive dialogue with the users. So it's not really a chatbot. It's really conversational.

We also have an avatar so that you're actually looking at something, so that, you know, perhaps it's a little bit more personal, a little more engaging. Obviously, we've been in this safe AI business since 2019. So I don't know if anyone saw the announcement from the co-founder who left OpenAI; I think it's called SSI, right? Safe Superintelligence. So he's now understanding that OpenAI and ChatGPT cannot be controlled, so he's off to create a different, safer type of AI system. But like I said, we've been doing that since 2019. So the newsworthy part is he just raised $1 billion at a $5 billion valuation. He has no product. He just has 10 people on his team.

So, you know, clearly he has the resume, right? So people are like, "Oh, he's a founder of OpenAI." So clearly they're betting on him and his team, but, you know, their approach can't be terribly different from the approach that we've taken. And that's why I wanted to show this slide, because I want you to look at the patent portfolio. So obviously, we have a bunch of different, you know, AI technologies that we developed around sort of how to make AI safer, right? So it's actually, it's structurally different, how we built our solution versus how OpenAI built their solution. So we have 21 active patents and 26 that are patent pending, and those all revolve around how to make AI safer.

So I'll go into some of the details, but before I do that: I used to live in Maryland for, you know, thirty years, and I just recently moved out to Seattle. So I wanted to use this analogy as I was watching these big cruise ships go in and out. Current AI, if you will, is like a large cruise ship that doesn't have propulsion, navigation, or control systems built in. And Sam Altman said just as much, right? He actually said he doesn't quite know how transformer technology works, and that's the truth, right? No one really knows how that works. So right now, what companies are doing is they have this large, unwieldy beast, and they're trying to help navigate the channels using tugboats. So that's the analogy that I'm trying to draw.

They're trying to put up guardrails to try to have it behave the way you want it to behave, or provide responses the way you want the responses to be. So if you're trying to navigate through a channel with no propulsion, no control, no navigation, how many tugboats do you really need to make it go straight or slow down or dock so you can take on passengers, right? It's not easy. So you'll see people trying to use, you know, these open large language models, and they're constantly adding guardrail after guardrail, and then if it still fails, they just add more guardrails. Because at the end of the day, what they have is an unwieldy beast, which is what you see here. I hope that makes a little bit of sense with the analogy.

So anyway, being in Seattle, I was like, "Okay, this is how I'm gonna talk about sort of the current systems versus our system." Okay, so how is BEN different? So ours is fully controlled and optimized for user experience, and the only way you can do that is if you have the full stack. So I don't know how many of you are, you know, sort of technical, but full stack means you have the front-end layer, which is the layer that interfaces with the user. So here you see that, you know, we're on mobile, laptop, desktop, or, you know, kiosk, or we could even actually stream just verbal messages through car or store audio, right? So that's one. Obviously, various different types of avatars, languages; we can change the voice, the tone, the cadence.

Anything can be customized, as if we were trying to create an entity that you think is really the optimal one for your customer engagement. And because we have the full stack, we can also incorporate things like, you know, graphics and other images into our AI application. So, you know, I have it on my phone, so if you guys are interested after this, I'm happy to show you how our system works. The middleware layer is really where you do the integration, because we don't believe an AI system is a standalone part of your business. AI needs to be integrated into your business, and the question is: how do you do that if you don't have this middleware integration layer?

We built an integration layer, not only to provide sort of the engagement with the users, but at the end of the day, what if a user says, "Yeah, that sounds good. I wanna get a COVID vaccine. Can you make an appointment?" Well, the AI system should be able to make that appointment for you, right? But that requires integration to, you know, some system at a pharmacy, right? So you need this middleware layer, and that also needs to be incorporated into this full stack. Most importantly, the back-end layer. So for the back-end layer, we have our own language model. And it's not as unwieldy as the picture that you saw. It's two orders of magnitude smaller, yet we actually drive comparable results to those large language models, and I'll tell you how we do that.

So when it's smaller, then you have the advantage of performance, speed, cost, concurrency, right? Let's say it's a hundred times smaller or three hundred times smaller. If you can get the same answer or same response with a small language model instead of a large one, it's just gonna be, you know, cheaper to operate, right? We also use a very proprietary RAG, which is retrieval-augmented generation. So the way we do that is, we take the dataset that's been provided to us by the clients or publicly available through, let's say, the FDA or CDC, and we use that dataset to provide the response. So we're not so much making up an answer; we're just retrieving an answer from a known, quantifiable dataset. Right. Yeah. So I just wanted to kinda highlight, you know, we went public through a de-SPAC back in March.
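The retrieval-first flow described above can be sketched in a few lines of Python. This is a toy illustration of the RAG idea only, not BEN's actual implementation; the corpus, the word-overlap scoring, and the threshold are all invented for the example:

```python
# Toy sketch of retrieval-augmented generation (RAG): answers are pulled
# from a curated corpus (e.g., approved CDC/FDA text supplied by a client),
# not generated from open internet training data.

def tokenize(text):
    """Lowercase and split on whitespace; a crude stand-in for real indexing."""
    return set(text.lower().split())

# Illustrative corpus; a real deployment would ingest approved documents.
CORPUS = [
    "CDC: COVID-19 and flu vaccines may be given at the same visit.",
    "FDA: report suspected vaccine side effects through the VAERS system.",
]

def retrieve(question, corpus, min_overlap=2):
    """Return the passage sharing the most words with the question,
    or None when nothing overlaps enough (the 'I don't know' case)."""
    best, best_score = None, 0
    q = tokenize(question)
    for passage in corpus:
        score = len(q & tokenize(passage))
        if score > best_score:
            best, best_score = passage, score
    return best if best_score >= min_overlap else None

# In-scope question: retrieved from the approved corpus.
print(retrieve("Can I get the COVID vaccine and flu vaccine together?", CORPUS))
# Out-of-scope question: no passage matches, so the assistant declines.
print(retrieve("Who won the Cowboys game?", CORPUS))  # None
```

A production retriever would use embeddings rather than word overlap, but the contract is the same: answer only from the approved dataset, and return nothing when the dataset has no match.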

Wanted to kinda highlight our recent progress. So, we closed a $4.95 million placement in May, and recently a $5.95 million private placement in August with existing investors, and those placements were done at a premium to the market price. Sky, one of our AI assistants, actually participated in the Q2 earnings call that I conducted. And if you... I don't know, the video must exist somewhere, or the recording. So if you Google and find our earnings call, you will see my interaction with our AI assistant. I believe that might be the first of its kind, right? Where a public company has used an AI assistant actively in the earnings call. But, you know, that's how much confidence we have, because she is not allowed to hallucinate, right? She's not allowed to make up an answer.

Basically, we just had her ingest all of our filings, like, you know, the 8-K, the 10-Q, and then our website information. So if I ask her a question about, you know, who is Brand Engagement, and how is their technology different, you know, she goes through and provides a, you know, fairly well-thought-out answer. So anyway, if you're interested, you can check it out at another time. MedAdvisor is a big tech company that supports the pharmacies. They work with 80% of US retail pharmacies. So, you know, think of CVS, Walgreens, Walmart, those guys. So we are actually now working on, you know, a vaccine assistant. Our AI will be able to answer any and all questions about vaccines and their safety and efficacy.

And again, be able to make an appointment if the user says, "Yeah, I'm interested in getting, you know, the COVID and the flu vaccine together. You know, how can I make that appointment?" So we are working with them. Weill Cornell is a hospital network right up the road, the university medical school and the hospital combined, and we actually have our full-size kiosk there, so it's life-size. And I used that instance of our AI to do our earnings call recording. So you'll see me interacting with Sky, who is just as tall as I am. InterVent and Members Only Health, they're sort of patient-services-type companies. They've committed to deploying our technology. Vybroo and Farmacia Roma. Vybroo is an audio-only tech company in Mexico.

Farmacia Roma is a large pharmacy chain in Mexico, and they're actually using our technology to deliver audio messages to the shoppers in the pharmacy. So, sort of the key points that I wanna just kinda highlight. So, the large language model. If you look at ChatGPT, it's not really an AI system or AI platform; it's really just a large language model. Again, sort of unwieldy. You just don't know how to really predict how it's gonna behave, so you have to put up guardrails to try to make sure it doesn't hallucinate, or say anything that's, you know, toxic or inappropriate. So the challenge with that is the unknown training data. Where does ChatGPT get its data? It's the internet.

So if you say, "Hey, you know, can I take a COVID vaccine with the flu vaccine?" it might be getting its answer from Reddit, right? Because they have a contract with Reddit, so it could be just someone's opinion. Whereas we would get the same response from the CDC and the manufacturers of those drugs. And we actually ingest clinical trial information so that we can reference those trials in providing our response. Guardrails, again, LLMs have to put guardrails all over. And the LLMs are also compelled to give you an answer. So even if it doesn't know the answer, it has to give you an answer, and that's where hallucination comes from, right? Because it's not quite sure if this was the correct answer, but this is the most sort of probable answer. So it's very much a probabilistic engine.

And that's where, you know, they hallucinate, and that fails the users, right? Especially in something like healthcare, you don't want it to guess. It should really know the answer. And if you Google, "Can I take, you know, these two vaccines together?" again, you're taking chances by doing Dr. Google, right? We don't know whether the answer is gonna be correct or not. So you would likely pick up the phone, call your pharmacist, and ask whether you can actually take those two vaccines at the same time. And we know the pharmacies are, you know, understaffed and overworked, and that's the challenge that they're dealing with. LLMs are not so good at math. I think we all know that. Here's the key fact: they use a shared dataset and shared learnings across all of their customers.

If you're using ChatGPT, and you have some prompts that you're using, "Hey, can you analyze this book for me," right? That book is now available to everyone else who may be interested in something similar. You are essentially contributing to the dataset of ChatGPT. In our case, your environment is your own environment. No one else has access to that, right? No other company is going to get the learnings from your dataset that you've inputted, right? So every instance or deployment of our AI is ring-fenced. It's completely protected just for you. So let's say, you know, Walmart deploys something like this, and then let's say Walgreens does theirs.

There is no cross-talk between the two instances, which means, as a company, if you feel like you've fine-tuned your AI platform to provide really good, empathetic responses to your consumers, that learning all stays within that instance of the AI deployment. And this is how companies are gonna adopt it, right? They're not gonna be willing to share their learnings with their competitors. So right now, you know, when you're writing, you know, term papers and things like that, it's okay, right? You're getting ideas from other students, and you're sharing your ideas with them. That's fine. But when it comes to businesses, I don't think that's what the companies are going to want their AI system to do. So I think this is the fundamental difference.

You know, obviously, you know, on our side, we have trained and fine-tuned our language model on very specific approved material. We can also run on CPUs versus GPUs, so there aren't too many companies who could make that claim. How can we make that claim? I told you, our large language model is, like, one two-hundredth the size of GPT-4o, which means we don't have to have massive parallel processing to give you an answer in two seconds. We can run sequentially on CPUs and still give you the answer in two seconds.

If we don't know the answer, our AI will say, "I don't have the answer to that." So if I were to show you the demo, and we ask about, you know, vaccines, and then all of a sudden we ask, "Hey, who won the, you know, football game between the Cowboys and, you know, whoever?" it's gonna say, "Yeah, that's not my area of expertise. I can't give you an answer." And that's pretty much what a pharmacist would do, right? Like, they're not gonna know everything about everything. They're very good at knowing, you know, the drugs, their benefits, their side effects, all of those things, interactions, right? So in this case, if we built an AI system around, you know, vaccines, it is going to be an expert in that only and nothing more. It's a fundamental difference.

Okay, and then, you know, deployment options: we can deploy on a community cloud. So if people wanna just test a few things without spending a lot of money and, you know, no infrastructure, we can do community. A lot of companies, you know, once they get more serious, they're gonna want sort of their private cloud so that they can contain all of their systems and datasets within their own environment. And believe it or not, we call this Citadel, but it really means on-prem. So there are lots of customers, like hospitals and pharmacies, that are actually insisting on on-prem deployment because of concerns that private healthcare data could potentially leak out if it's in the cloud. So we knew that this was gonna be one of the requirements coming into, like, the healthcare market that we're in.

So we have an on-prem deployment model. And like I said, we don't need these fancy NVIDIA GPUs. We can run mostly on, you know, servers with CPUs. So I think that was my last slide. Yeah, there we go. So, yep, I have, like, a minute thirty if you guys have any questions. Yes?

Jake, [inaudible]. Insurance to provide this medical information... better have insurance?

Yes. So, you know, because we are taking datasets from publicly validated sources like the CDC. So if you went to the CDC website, you'd get the same information, but you'd have to search for it. With ours, you would just have a conversation, and you would just ask, "Can I take the COVID vaccine and RSV at the same time if I'm a diabetic?" Let's say those are the conditions, right? It's going to give you the proper answer that you could find on the CDC website after a few minutes of searching, right? So, insurance-wise, we believe we're very much like WebMD. I mean, obviously, the government agencies like the FDA, they're trying to figure out, you know, how to categorize AI.

But so far, with the demo that I've done for the FDA, they say, "Yeah, you guys fall more into, like, WebMD versus a medical device," right? So we're not really giving a diagnosis; we're giving product information in a very efficient way. Yes.

Thank you. Next question.

How many, how many times does the AI not answer a question from a range of questions?

So if there's any question about the area of knowledge, it will always give you an answer, if the answer exists within the ingested content. So I mean, it's really when you're off base, you're asking about something totally different, then it's not going to give you an answer, right? And it's not that we couldn't, we just try to limit the knowledge base, rather than be, like, all-knowing type of an AI system. Yes.

Hi, how are you? I think we met already. Just to know, because I've seen a lot of companies that do diagnostics: can your bot also be proactive and ask, for example, a question to retrieve some information to give a diagnosis, or not yet?

So yeah, I'm not going to use the term diagnosis, because there's a legal meaning to that. But we are actually able to do outbound questioning of the user to get information. So, for example, if you wanted to make an appointment, we need your name, phone number, I don't know, maybe birthday, things like that. So we can actually do that. The diagnosis part, I think, is not too far away, but it really has to go through the medical-legal hurdle. Not that technically it couldn't be done, right?

Yeah. So you guys are not there yet. I'm just asking because there are some platforms where, for example, "Oh, I have a cough," and then the bot would ask-

That's right

... "Okay, what kind of cough?" And trying to get when that happens, and it retrieves the information to-

Try and process it. Are you guys there yet?

Yes, we could. We could do that, for sure. We're not marketing that yet because of the sort of legal ramifications. Our first project was actually us ingesting a medical textbook from Johns Hopkins University, and we were able to do something similar, but that's not what we're marketing today.

Okay, perfect.

Yes.

Is your business model today that you are creating custom systems for companies?

Correct.

They want a custom system that would be related to whatever products they are offering?

Correct.

You're saying you're primarily starting in healthcare?

So we are building custom systems, but we have a platform that allows the custom system to be built with just 10% of the work. So 90% of the platform is already built. So whether you're a financial services company, a telco, or a retailer, if you say, "I want an AI that does XYZ," we would say, "Okay, we need a dataset," right? So if you want someone who understands a lot about vitamins or, you know, nutrition, we need the dataset. We want you to tell us what you want the dialogue to feel like, right? So that's the dialogue manager. You want it to be informational, empathetic, all of those things. And then, what does your avatar look like? You know, is the person wearing, you know, some outfit? Sure.

So all of that is really 10% of the work. That's why we're able to build custom systems for each company, because the bulk of the work's already been done. We're essentially just configuring the AI assistant to have the look and feel of your staff members. That's the goal, right? But the safe layer of AI is already done. We're doing it, right? So that's why, when I saw the news, I was like, "Oh, my God, this guy raised $1 billion at a $5 billion valuation, and he's just starting out to build a safe system?" So you can see the architecture is quite different, right? If you look at, like, OpenAI, they don't have that three-layer architecture.

And you're not aiming to go, like, out to the public in general. This is for companies to have, and for each company to decide how they want to...

Exactly. Exactly. So we're B2B2C, right? We want our customers to use our technology to get to their customers. The other thing, even things like, you know, they say, "Oh, what if, you know, the AI learns and, you know, it does some strange things?" Our AI comes out of the factory with a default setting that it shall not learn from previous conversations. Because in a medical example like that, you want it to give the same answer every time, right? And not have the previous conversation impact what kind of response it gives. But that's a setting. So at some point, if you're doing, I don't know, market research with consumers, some companies will say, "Yeah, we wanted it to learn from previous conversations so that it could maybe extract more insight from the consumer." Then you would actually turn on the learning feature.

Ours, out of the gate, has the learning feature turned off for these types of applications. We just wanted to give the same answer all the time, like a call center rep should, right? It should just be the same answer all the time, regardless of how the question is phrased. Yes?
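The learning-off-by-default behavior described above could be modeled as a simple configuration flag. This is a hypothetical sketch in Python; the class names and fields are invented for illustration and are not BEN's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class AssistantConfig:
    # Factory default: do not retain anything across conversations,
    # so the same question always yields the same answer.
    learn_from_history: bool = False

@dataclass
class Assistant:
    config: AssistantConfig = field(default_factory=AssistantConfig)
    history: list = field(default_factory=list)

    def answer(self, question, knowledge):
        """Deterministic lookup from an approved knowledge base."""
        response = knowledge.get(question, "I don't have the answer to that.")
        if self.config.learn_from_history:
            # Opt-in only, e.g. for market-research deployments.
            self.history.append((question, response))
        return response

kb = {"Can I take the COVID and flu vaccines together?": "Yes, per CDC guidance."}
bot = Assistant()
print(bot.answer("Can I take the COVID and flu vaccines together?", kb))
print(len(bot.history))  # 0 -> nothing retained by default
```

Keeping learning off by default makes the assistant behave like a call-center script: repeatable answers, with cross-conversation memory as an explicit opt-in rather than an emergent behavior.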

What are the terms of the SEPA line with Yorkville?

Oh, the terms with Yorkville? It's a $50 million SEPA that, you know, we have in place to be able to fund our business and growth. And yeah, I mean, it's a three-year contract that we have with them.

Anthony Stevens
Analyst, H.C. Wainwright

Any further questions from the audience?

I guess, 'cause what you're selling to these companies is, like, a layer of security, so the data doesn't get leaked. What cybersecurity partners are you working with?

Paul Chang
CEO, Brand Engagement Network

Oh, so we do have cybersecurity partners. A-LIGN is one of those. Mission... I'm sorry? Yeah, CrowdStrike, Mission, Elevate. So we do. I mean, obviously, cybersecurity is super important for us. So we are working with several companies. We are also, and I kind of skipped over this, HIPAA compliant and SOC 2 Type 1 certified, right? So we take sort of the data security, the AI security, very, very seriously. And that's why, you know, when people talk about safe, we've been talking safe, right? And I'm hoping our voice will now be heard, because some of the other big names are now starting to talk about safe AI, right? And it's not to try to put a guardrail at every step of the way.

It's really to fundamentally build something different than taking what they have and trying to, you know, protect the users from it. Any other question? Well, thank you very much.

Thank you.

I feel like you guys were listening to me because you had some really good questions, so I appreciate that. All right, thank you.
