it's clear that we are in a period of immense technological revolution.
maybe even disruption.
i decided i needed to be part of the revolution this time.
i was too young during the internet boom. anyway.
the product development process started almost 8 months ago.
it didn't start as this exact product, but after a few pivots remova was born.
simply put, remova is a safe, non-technical ai platform for enterprise.
it has everything companies need, and it's easy for teams to use.
our main goal here is ai adoption for companies.
so we give companies the ability to see and track ai adoption within the organization.
because not everyone is a 25-year-old vibe coder or ai native.
looking at b2b ai adoption rates in developed countries, actual daily usage is no more than 25%.
and i'd guess only 5% are using a company-wide ai, instead of everyone relying on their own gpt,
gemini etc.
of course i think ai companies are revenue-focused and this makes them go after more
technical ai usage.
in the end, tech teams are using the most tokens.
not an accountant.
not a manufacturer.
not a supermarket manager.
etc.
i want to help these people adopt ai.
because if they don't, they will get hurt by competitors who have advanced immensely thanks
to ai.
so the battle here is to get everyone using ai within an organization.
not just the it team or marketers.
of course i need to first convince the decision makers.
the managers, the owners.
then they need to convince their team.
we are not talking about asking chatgpt about the weather here.
we are talking about being ai native, pairing ai's raw power with human intuition so every piece of
work done is high quality.
maybe even innovation.
ai cannot drive innovation but human driven ai can.
anyway this is not the only issue here.
while company employees use ai, there is a risk to be concerned about.
i'm talking about ai safety here again.
yes.
if you are not a cyber-security expert of course you are susceptible to ai risk.
you might share the wrong information, maybe even classified information.
it might seem harmless in the moment.
but it puts both the organization and the employee at great risk.
this risk is not worth it.
* data poisoning
* intellectual property loss
* privacy leakage
* prompt injection
* regulatory fines
and so on.
when building remova our first goal was always making sure the ai is safe for everyone.
so we developed many features and guardrails to protect teams and organizations at the same
time.
of course we also wanted to make sure all those rules didn't blunt the ai's creativity.
this wasn't an easy task, because ai behavior can be unpredictable.
then we designed a multi-layered architecture.
it's all boring technical stuff.
but think of it like flying: at the airport you pass through multiple security checks.
some are quick, like showing your id and your ticket.
some are more advanced, like scanning your liquids.
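to make the airport analogy concrete, here's a minimal sketch of what a layered guardrail pipeline could look like. the layer names and checks are my own illustrative assumptions, not remova's actual design.

```python
import re

# hypothetical sketch of layered guardrails; the checks below are
# illustrative assumptions, not remova's actual implementation.

def check_pii(prompt: str) -> bool:
    # quick check, like showing your id: block obvious ssn-like patterns
    return not re.search(r"\b\d{3}-\d{2}-\d{4}\b", prompt)

def check_secrets(prompt: str) -> bool:
    # deeper check, like scanning liquids: block credential-like strings
    return "api_key" not in prompt.lower()

LAYERS = [check_pii, check_secrets]

def guard(prompt: str) -> bool:
    # a prompt must clear every layer before it reaches the model
    return all(layer(prompt) for layer in LAYERS)
```

the point of stacking layers is that each one stays simple and cheap, and a prompt only reaches the model if every check passes.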
in the end, our tests indicate remova is 95% safer than flagship llms like chatgpt, claude,
gemini etc.
and the interesting part is you still use those same flagship models.
it's not like we created a dumbed-down "safe" model. no.
and if you'd rather not use them, you can use open source models instead. safe and secure either way.
and yeah this is my addition to this revolution.
i will share remova's journey in the following weeks.
right now it's just published.
wish me luck.