"AI Course for Developers – Build AI-Powered Apps with React"

Video tags

ai course
learn ai
ai course for developers
ai engineering
react js
react.js
full-stack development
code with mosh
programming with mosh
mosh hamedani
large language models
llm
ai
genai
reactjs
generative ai
llms
gen ai
artificial intelligence
ai engineer
ai full course
ai for beginners
openai
ai developer
ai applications
programming
chatbots
aiappdevelopment
aicourses
Subtitles

00:00:00
[Music]
00:00:01
AI is everywhere, but have you actually
00:00:03
built something with it? In this course,
00:00:05
you will learn how to build real AI
00:00:07
powered features like the ones you see
00:00:09
in apps from Google, Amazon, and beyond.
00:00:12
We'll start by building a solid
00:00:14
foundation, understanding language
00:00:16
models, tokens, context windows,
00:00:18
choosing the right models, model
00:00:20
settings, and prompt engineering. Then
00:00:22
we'll build projects starting with a
00:00:24
chatbot that can answer questions about
00:00:26
an imaginary theme park, helping
00:00:28
visitors find what they need faster.
00:00:31
Next, we'll create a tool that analyzes
00:00:33
customer feedback and delivers clear,
00:00:36
actionable insights so users can make
00:00:38
smarter decisions in seconds. And
00:00:40
finally, we'll wrap up with powerful
00:00:42
open-source integrations you can run
00:00:45
anywhere. Along the way, you will learn
00:00:47
clean architecture principles, follow
00:00:49
best practices, and work with modern
00:00:51
tools like Bun, Tailwind, shadcn,
00:00:54
Prisma, and Ollama. By the end, you will
00:00:57
have the skills and confidence to build
00:00:59
AI powered apps people actually want to
00:01:01
use. I'm Mosh Hamedani, a software
00:01:04
engineer with over 20 years of
00:01:05
experience, and I've taught millions how
00:01:08
to code and become professional software
00:01:10
engineers through my YouTube channel and
00:01:12
online school, codewithmosh.com. If
00:01:14
you're new here, make sure to subscribe
00:01:16
as I upload new videos all the time.
00:01:18
Now, let's jump in and get started.
00:01:25
[Music]
00:01:31
Before we dive in, let's talk about what
00:01:33
you'll need to know to get the most out
00:01:35
of this course. The ideal student is
00:01:38
someone who already knows the basics of
00:01:40
front-end development and wants to bring
00:01:42
AI into their applications to make them
00:01:44
more engaging and smarter. That means
00:01:46
you should be comfortable with modern
00:01:48
JavaScript and TypeScript: things like
00:01:50
arrow functions, destructuring,
00:01:52
promises, and async and await. You
00:01:54
should also know how to build simple
00:01:56
React applications. You should know how
00:01:58
to create components, work with JSX, and
00:02:01
use the state and effect hooks. Just the
00:02:04
basics. You don't need to be a React
00:02:05
expert. If you know a bit of backend
00:02:07
development and databases, that's great,
00:02:10
but not required. The same goes with AI.
00:02:12
You don't need any prior experience. I
00:02:14
will walk you through everything step by
00:02:16
step. If that sounds like you, you're
00:02:18
ready to get started.
00:02:21
[Music]
00:02:24
Assuming that you're the right student
00:02:26
for this course, let me give you a quick
00:02:28
overview of how it's structured and what
00:02:30
to expect. First, let me tell you what
00:02:33
this course is not. If you're looking
00:02:35
for one of those build an app with AI in
00:02:37
1 hour miracles, you know, where you
00:02:39
paste a few prompts into ChatGPT, sit
00:02:42
back with a latte, and poof, your
00:02:44
billion dollar startup is ready. This
00:02:45
isn't that course. We are not doing vibe
00:02:47
coding here. We're building smart
00:02:49
features with AI, the kind that actually
00:02:51
make your apps more useful and more
00:02:53
engaging. And we'll do it the right way
00:02:55
so you understand exactly what's going
00:02:57
on under the hood. All right, here's how
00:02:59
the course is laid out. In section one,
00:03:02
we'll start with the foundations. You'll
00:03:04
learn what language models are, what
00:03:06
they can do, and how to work with them.
00:03:08
We'll talk about tokens, context
00:03:10
windows, model settings like
00:03:12
temperature, and how to call models.
00:03:14
This section sets you up with the
00:03:15
concepts you need before we dive into
00:03:17
building. Section two is all about
00:03:20
setting up a modern full stack project
00:03:22
from scratch. Now, we could use Next.js
00:03:24
here, but I decided not to because
00:03:26
not everybody uses it or even likes it
00:03:29
and some developers prefer to keep the
00:03:31
front end and back end completely
00:03:33
separate. So in this course, we'll keep
00:03:36
the back end and front end completely
00:03:37
separate. This helps you clearly see how
00:03:40
the two parts talk to each other. We'll
00:03:42
use a modern stack with Bun, Express,
00:03:45
React, Tailwind, and shadcn. And don't
00:03:48
worry if you haven't used these tools
00:03:49
before. I'll teach you everything from
00:03:51
scratch. Now, if you prefer Next.js,
00:03:53
you can still apply everything you learn
00:03:55
in this course to a Next.js project. The
00:03:57
principles are exactly the same. Section
00:04:00
3 is our first big project, a chatbot
00:04:02
that can answer questions about an
00:04:04
imaginary theme park to help visitors
00:04:07
find the information they need faster.
00:04:09
We'll start with the back end, build it
00:04:11
step by step, and then refactor it using
00:04:13
clean architecture principles. Once the
00:04:15
back end is solid, we'll build the front
00:04:17
end the same way. We'll add features,
00:04:19
improve the UI, and make the chatbot
00:04:21
better with each lesson. This is where I
00:04:23
want you to code along with me. Don't
00:04:25
just watch. Understand the code, the
00:04:27
problems we run into, and why we solve
00:04:29
them the way we do. If it feels like I'm
00:04:32
going too fast, just watch one lesson at
00:04:34
a time, take notes, and then repeat it.
00:04:37
The lessons are short and focused, so
00:04:39
you won't need to remember a lot at
00:04:40
once. In section four, we'll dive into
00:04:43
prompt engineering, the art of writing
00:04:45
prompts that actually get the results
00:04:47
you want. You will learn how to give
00:04:49
context, control output format, use
00:04:52
examples, handle errors, and reduce
00:04:54
hallucinations. Section five is another
00:04:57
full stack project. This time, a product
00:05:00
review summarizer. You have probably
00:05:02
seen this on a lot of websites. A quick
00:05:04
summary of reviews so you can decide
00:05:06
faster. We'll build our own version from
00:05:08
scratch, complete with a database and
00:05:10
Prisma migrations. It's a bigger, more
00:05:13
complex project than the chatbot. So,
00:05:15
we'll get to explore a lot of techniques
00:05:17
and best practices that most courses
00:05:19
skip. And here's the cool part. What you
00:05:21
learn here isn't just for summarizing
00:05:23
text. You can apply the same techniques
00:05:25
to build all kinds of AI powered
00:05:27
features. Finally, section six is about
00:05:30
working with open-source models. You'll
00:05:33
learn why they matter, how to find them,
00:05:35
how to run them locally, and how to
00:05:37
integrate them into our applications.
00:05:39
This opens up a whole new world where
00:05:41
you're not tied to commercial APIs. Now,
00:05:44
throughout the course, I'll share tips,
00:05:46
tricks, and shortcuts you don't want to
00:05:48
miss. That's why it's important to watch
00:05:50
every lesson in order and code along,
00:05:52
especially during the projects. This is
00:05:54
a hands-on course, and you will get the
00:05:56
most out of it by being active from the
00:05:59
very beginning. Oh, and by the way, what
00:06:01
you're watching here on YouTube is
00:06:02
actually the first two hours of my full
00:06:05
7-hour course. I'm giving you this part
00:06:07
for free so you can get a feel for my
00:06:09
teaching style and see if you want to go
00:06:11
deeper. Next, we're going to set up our
00:06:13
development environment.
00:06:15
[Music]
00:06:21
Before we dive into coding, let's
00:06:23
quickly set up our development
00:06:24
environment. First, make sure you have
00:06:26
the latest version of Node on your
00:06:28
machine. So, open up a terminal window
00:06:30
and run node -v.
00:06:33
Look, on this machine, I'm running node
00:06:35
version 22.17.
00:06:37
So, make sure you have the same version
00:06:39
or higher. If not, head over to
00:06:41
nodejs.org and download the latest
00:06:43
version. Now, for coding, we'll be using
00:06:46
VS Code. You're welcome to use your
00:06:48
preferred editor, but I highly encourage
00:06:50
you to use VS Code because throughout
00:06:52
the course, I'm going to show you a lot
00:06:53
of cool shortcuts and useful plugins
00:06:56
that not only help you write code
00:06:57
faster, they also make your job more fun
00:07:00
and enjoyable. So, that's all you need
00:07:02
for now. Node.js and VS Code. We'll
00:07:04
install any other tools as we go through
00:07:06
the course. All right, with your
00:07:08
environment ready now, let's get
00:07:09
started.
00:07:12
[Music]
00:07:14
Welcome back. In this section, we'll lay
00:07:16
the foundation for everything we'll
00:07:18
build in this course. We'll explore what
00:07:20
language models are, what we can do with
00:07:22
them, and how to use them effectively in
00:07:25
real world applications. We'll also
00:07:27
cover practical concepts like tokens,
00:07:30
cost, model selection, and key settings
00:07:32
that shape the model's behavior. Even if
00:07:35
you have used ChatGPT before, this
00:07:37
section will give you a solid mental
00:07:39
model. So, when we start building,
00:07:40
you'll know exactly how and why things
00:07:43
work the way they do. Now, let's jump in
00:07:45
and get started.
00:07:48
[Music]
00:07:54
Since ChatGPT came out, the software
00:07:56
world has changed fast. New models are
00:07:58
being released every month, new tools,
00:08:00
new APIs, new expectations, and new job
00:08:03
titles. One of the most exciting ones is
00:08:05
AI engineer. You have probably seen it
00:08:08
pop up in job postings. But what exactly
00:08:10
is that? Well, it's not the same as a
00:08:13
machine learning engineer. Machine
00:08:14
learning engineers build and train
00:08:16
models. They clean data, tune
00:08:18
architectures, and optimize training
00:08:20
pipelines. It's math-heavy and research
00:08:23
focused. AI engineers, on the other
00:08:25
hand, use pre-trained models, especially
00:08:28
large language models, to build smarter
00:08:31
applications. They don't need to
00:08:32
understand the math behind the model.
00:08:35
They need to understand how to use it
00:08:37
and how to integrate it into real world
00:08:39
apps. It's a lot like using a database.
00:08:42
As a software engineer, you don't need
00:08:44
to know how MySQL works internally. You
00:08:46
just need to know how to query it,
00:08:48
structure your data, and build a
00:08:50
reliable product. In the same way, as an
00:08:53
AI engineer, your job is to understand
00:08:55
how to use these powerful AI models to
00:08:57
solve real problems. Right now,
00:09:00
companies around the world are hiring
00:09:02
engineers who know how to build AI
00:09:04
powered features: things like
00:09:06
summarization, translation, intelligent
00:09:08
search, automation, and personalized UX.
00:09:12
Let me show you just a few examples.
00:09:13
Amazon now shows an AI generated summary
00:09:16
of product reviews on the product page.
00:09:18
This saves shoppers time and increases
00:09:21
conversion by making the buying decision
00:09:23
faster. You'll see the same pattern in
00:09:25
many other applications where AI is
00:09:28
being used to surface quick takeaways
00:09:30
from long threads. As another example,
00:09:32
ActiveCampaign, which is a marketing
00:09:34
platform, lets marketers use AI to
00:09:37
generate full email campaigns with just
00:09:40
a few prompts. Instead of starting from
00:09:42
scratch, they get instant draft content
00:09:44
they can publish and send. Here's
00:09:46
another example. On Twitter or X, if you
00:09:49
see a post in another language, you'll
00:09:51
often see a translate link. Behind the
00:09:53
scenes, a large language model detects
00:09:55
the language, determines your locale, and
00:09:58
generates a translated version
00:10:00
instantly. This feature is becoming
00:10:02
standard across social platforms and
00:10:04
news applications. Here's another
00:10:06
example. Platforms like YouTube and
00:10:08
Twitch use AI to automatically flag
00:10:11
things like spam, hate speech, or
00:10:13
inappropriate content. It helps keep
00:10:16
communities safe without needing
00:10:17
thousands of human moderators watching
00:10:20
everything in real time. Here's another
00:10:22
example. Freshdesk, which is a customer
00:10:24
support platform, uses AI to
00:10:26
automatically categorize, prioritize,
00:10:28
and route incoming support tickets. So
00:10:31
instead of agents manually sorting
00:10:33
through every request, the system sends
00:10:36
each ticket to the right team. And that
00:10:38
means agents spend less time organizing
00:10:40
and more time solving. Let's look at
00:10:42
another example. On Redfin, which is a
00:10:44
platform for finding homes, when we are
00:10:47
viewing a property listing, there's a
00:10:49
built-in chat assistant that can answer
00:10:51
questions about that specific property.
00:10:54
So, instead of waiting to talk to an
00:10:55
agent, users can get the basic info they
00:10:58
need right away. And these are just a
00:11:00
few examples. The possibilities are
00:11:02
endless. Every day, developers are adding
00:11:04
smart features like these into their
00:11:06
applications. Not just for novelty, but
00:11:09
to save time, reduce costs, and build
00:11:11
smarter, more helpful experiences. I
00:11:14
believe that going forward, every
00:11:16
software engineer will be expected to
00:11:18
know how to work with AI models just
00:11:20
like we're expected to work with
00:11:22
databases today. You will need to know
00:11:24
about large language models or LLMs,
00:11:27
prompt engineering, retrieval-augmented
00:11:29
generation or RAG, vector databases,
00:11:32
building agents, and so on. This course
00:11:34
is your first step into that world. So,
00:11:37
if you're a developer and you want to
00:11:38
keep up with where the industry is
00:11:40
headed, you're in the right place. In
00:11:42
the next lesson, we'll break down what a
00:11:44
large language model actually is in
00:11:46
simple practical terms.
00:11:49
[Music]
00:11:55
Now that you understand why learning AI
00:11:58
matters, let's start unpacking the
00:11:59
fundamentals. In this lesson, we're
00:12:01
going to answer a basic but important
00:12:03
question. What is a large language
00:12:06
model? Let's break it down. At its core,
00:12:08
a language model is a system that's
00:12:10
trained to understand and generate human
00:12:13
language. There are several language
00:12:15
models available today. Some are
00:12:17
commercial like GPT by OpenAI, Gemini by
00:12:20
Google, Claude by Anthropic, Grok by
00:12:23
xAI, which is Elon Musk's company. We
00:12:26
also have open-source models like Llama
00:12:28
by Meta, Mistral by a European company
00:12:31
also named Mistral, and many, many more.
00:12:33
They're called large because they're
00:12:35
trained on massive amounts of text.
00:12:37
Everything from books and articles to
00:12:39
forums, code, documentation, and more.
00:12:42
Through this training, they learn
00:12:44
statistical patterns in a language.
00:12:46
Things like grammar, sentence structure,
00:12:48
tone, common facts, and phrasing. So
00:12:51
when we prompt an LLM with, let's say,
00:12:53
the capital of France is, it doesn't
00:12:56
look up the answer. It simply predicts
00:12:58
what a helpful response should look like
00:13:00
based on patterns it has seen during
00:13:02
training. It's like autocomplete but on
00:13:04
steroids. In practice, a large language
00:13:07
model is a giant mathematical structure,
00:13:10
usually multiple gigabytes in size, made
00:13:12
up of billions of parameters. Those
00:13:15
parameters represent patterns in
00:13:17
language like grammar, facts, tone, and
00:13:20
style. These models don't understand
00:13:22
language the way we do. They don't have
00:13:24
beliefs or intelligence. They're just
00:13:26
very good at predicting what comes next.
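Next-token prediction can be sketched as weighted sampling. This toy snippet is purely illustrative; the candidate words and probabilities are made up, and real models predict over tokens using billions of parameters, but it shows why the same prompt can yield different completions:

```javascript
// Toy illustration (not a real model): given the prompt "The capital
// of France is", pretend the model assigned these probabilities to the
// candidate next words. These numbers are invented for the example.
const candidates = [
  { token: "Paris", p: 0.92 },
  { token: "a", p: 0.05 },
  { token: "located", p: 0.03 },
];

// Sample one token according to its probability (roulette-wheel style).
function sampleNextToken(options) {
  let r = Math.random();
  for (const { token, p } of options) {
    if (r < p) return token;
    r -= p;
  }
  return options[options.length - 1].token;
}

const next = sampleNextToken(candidates);
console.log(next); // usually "Paris", but occasionally another candidate
```

Because the output is sampled rather than looked up, repeated runs can differ, which is exactly the behavior described above.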
00:13:29
The output is often so well written, so
00:13:32
fluent and structured that it feels like
00:13:34
there's intelligence behind the scenes.
00:13:36
But there isn't. It's just math and
00:13:38
probability based entirely on training
00:13:40
data. That's why if you ask ChatGPT the
00:13:43
same question multiple times, you'll
00:13:45
often get slightly different answers.
00:13:48
It's not repeating a stored response.
00:13:50
It's generating new output each time
00:13:52
based on likelihood, not truth. And that
00:13:55
brings us to a very important point.
00:13:57
Because these models don't understand
00:13:59
what they're saying, the quality of
00:14:01
training data is everything. If a model
00:14:03
is trained on biased, inaccurate, or
00:14:06
low-quality data, its responses will
00:14:08
reflect that. That's why some models
00:14:11
appear politically biased or why some
00:14:13
models give completely false answers
00:14:15
with total certainty. It all comes back
00:14:18
to the data. These days, many people use
00:14:20
language models to generate code and
00:14:22
it's impressive at first. But here's the
00:14:25
issue. These models are trained on
00:14:27
billions of lines of code from public
00:14:29
repositories including GitHub. And a
00:14:31
huge portion of that code is poorly
00:14:33
written, outdated, filled with
00:14:35
anti-patterns, or just broken. The model
00:14:38
doesn't know that. It simply learns
00:14:40
what's common, not necessarily what's
00:14:42
correct or maintainable. So when it
00:14:45
generates code, it might look clean,
00:14:47
sound confident, even compile, but it
00:14:49
could be buggy, insecure, or full of bad
00:14:52
practices. That's the danger of blindly
00:14:55
trusting model generated code. We get
00:14:57
something that looks professional but
00:14:59
isn't always reliable. So once again, I
00:15:01
want to emphasize that training matters
00:15:03
a lot. A model is only as good as the
00:15:06
data it's trained on. Garbage in,
00:15:08
garbage out. If it learns from clean,
00:15:10
high-quality code and accurate language
00:15:13
data, it performs well. If it trains on
00:15:15
messy, biased, or incorrect data, the
00:15:18
responses can be misleading or flat-out
00:15:21
wrong. Now, training a large model from
00:15:23
scratch isn't just about data. It also
00:15:26
requires enormous amounts of compute
00:15:28
power. We're talking thousands of GPUs,
00:15:30
weeks, or months of continuous training,
00:15:33
and infrastructure that only a handful
00:15:35
of companies in the world can afford.
00:15:37
That's why most of us don't train models
00:15:39
ourselves. As developers, our job isn't
00:15:42
to become machine learning engineers.
00:15:44
It's to understand how to talk to these
00:15:46
models via prompting, how to handle
00:15:49
their limitations, and how to integrate
00:15:51
them into our applications to build
00:15:53
smarter features. Just like we don't
00:15:55
need to build our own database engine,
00:15:57
we just need to know how to use one.
00:16:00
That's the mindset you will develop
00:16:02
throughout this course. In the next few
00:16:03
lessons, we'll dig deeper into how these
00:16:06
models work, things like tokens, cost,
00:16:08
and how to pick the right one for the
00:16:10
job.
00:16:12
[Music]
00:16:18
So I told you that as a developer you
00:16:20
need to learn how to integrate language
00:16:22
models into your applications. Now let
00:16:24
me show you what that actually looks
00:16:26
like. Think about a typical application
00:16:28
structure. We have a front end maybe
00:16:31
built with React or something similar, a
00:16:33
back end, a database for storing our
00:16:36
application data and now a language
00:16:38
model ready to generate or process
00:16:40
content. The LLM is usually not the
00:16:43
center of our application. It's a
00:16:44
supporting system. We send it input or a
00:16:47
prompt. We get a response and use that
00:16:50
response to enhance the user experience.
00:16:52
And the way we use it depends on our
00:16:54
feature. For example, a very common use
00:16:56
case is summarization. I showed you how
00:16:59
Amazon uses that to summarize reviews.
00:17:01
This is getting very common in modern
00:17:03
applications. We can also use LLMs for
00:17:06
generating content like emails, product
00:17:08
descriptions, social media posts and so
00:17:11
on. Another common use case is text
00:17:13
classification. We can use a model to
00:17:15
categorize input. For example, is this
00:17:18
spam or not? Is this review positive or
00:17:20
negative? Is this support ticket about
00:17:22
billing, login or cancellation? We can
00:17:25
ask LLMs to generate responses as JSON
00:17:28
objects like this. That means our back
00:17:30
end can easily parse the response, store
00:17:33
it and make decisions based on it. We
00:17:36
can also use LLMs for translating text
00:17:38
from one language to another. I showed
00:17:40
you how Twitter or X does this. Also,
00:17:43
the new iOS does that to translate texts
00:17:45
in real time. Another great application
00:17:48
of language models is in extracting
00:17:50
information. For example, we can give an
00:17:53
LLM some messy text like a PDF and ask
00:17:56
it to pull out structured data like
00:17:58
invoice number, amount, names,
00:18:01
addresses, and so on. We can also use
00:18:03
LLMs to build and integrate chatbots
00:18:06
into our applications. We can build
00:18:08
chatbots that answer questions based on user
00:18:10
data or business documents and so on.
00:18:13
All of these use cases follow the same
00:18:15
pattern. Text in, text out. We give the
00:18:17
model a prompt and it gives us back a
00:18:20
response. The response can be plain
00:18:22
text. It can be an array, a JSON object,
00:18:25
a number, an image, or whatever is
00:18:27
useful. Now that you have seen what LLMs
00:18:29
can do, let's look at what's actually
00:18:31
inside them. In the next lesson, we'll
00:18:33
talk about how these models work under
00:18:35
the hood.
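The "text in, text out" pattern above can be sketched as follows. The model response here is hard-coded to keep the snippet self-contained (in a real app it would come from an LLM API call), and the field names are just one example of a JSON format we might ask the model to follow:

```javascript
// Classification response we asked the model to return as JSON.
// Hard-coded stand-in for an actual LLM API response.
const modelResponse =
  '{"category": "billing", "sentiment": "negative", "spam": false}';

// Because the response is JSON, the back end can parse it and
// make decisions based on it.
const result = JSON.parse(modelResponse);

if (result.spam) {
  console.log("Discarding spam ticket");
} else {
  console.log(`Routing ticket to the ${result.category} team`);
}
```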
00:18:37
[Music]
00:18:43
Now that you know what language models
00:18:45
are and what we can do with them, let's
00:18:47
take a closer look at something that
00:18:49
plays a big role in how we use them
00:18:51
effectively, and that's tokens. What are
00:18:53
tokens? Well, when we send a prompt to a
00:18:56
language model, it doesn't process the
00:18:58
input as plain text. Instead, it breaks
00:19:01
the text down into smaller units called
00:19:03
tokens. These tokens can be whole words,
00:19:07
parts of words, punctuation, even emojis
00:19:09
or spaces. So tokens aren't the same as
00:19:12
characters or words. They fall somewhere
00:19:14
in between. To see this in action here
00:19:17
on Google, search for OpenAI tokenizer.
00:19:20
On this page, you can type a prompt here
00:19:23
or click show example. Look, this piece
00:19:27
of text has 252 characters and it's
00:19:30
broken down into 53 tokens. Down below
00:19:34
you can see these tokens color-coded. So
00:19:36
each piece represents a token. Now why
00:19:39
does this matter? Because tokens
00:19:40
directly impact cost. Let's take OpenAI as
00:19:43
an example. At the time of this
00:19:45
recording, generating 1 million output
00:19:47
tokens with GPT-4o mini costs 60 cents. With
00:19:52
GPT-4.1, the same task would cost $8.
00:19:56
That's 13 times more. So if you're
00:19:58
summarizing long documents or generating
00:20:01
large amounts of content, token usage
00:20:03
and therefore cost can add up quickly.
00:20:06
That's why when choosing a model, cost
00:20:09
should be one of the key factors. We
00:20:10
shouldn't just go for the latest or most
00:20:13
powerful model. Think about what your
00:20:15
app actually needs. It's kind of like
00:20:17
buying a phone. You don't always need
00:20:19
the latest iPhone Pro Max. Sometimes a
00:20:22
mid-range phone gives you everything you
00:20:24
need. Same logic applies here. So tokens
00:20:27
cost money, but there's also a limit to
00:20:29
how many tokens a model can handle at
00:20:32
once. That limit is called the context
00:20:35
window. The context window includes our
00:20:37
prompt, which is the input, the model's
00:20:40
response, and the chat history. That's
00:20:42
if we're building a conversational
00:20:44
experience. Again, all of this is
00:20:46
measured in tokens. For example, GPT-4o
00:20:49
mini has a context window of about
00:20:52
128,000 tokens. GPT-4.1 can handle
00:20:56
around 1 million tokens. Mistral, which
00:20:58
is an open-source model, supports around
00:21:01
32,000 tokens. So, if we send a very
00:21:04
long prompt and hit the token limit or
00:21:06
the context window, the model will stop
00:21:09
even in the middle of a sentence. That's
00:21:11
why it's important to know how much
00:21:13
context our model can handle. But again,
00:21:15
we don't always need the largest context
00:21:17
window or the biggest model. Mistral,
00:21:20
for example, might be perfectly
00:21:22
sufficient for tasks like summarizing a
00:21:24
blog post or classifying a support
00:21:26
ticket. It all depends on the needs of
00:21:28
our application. In the next lesson, I'm
00:21:31
going to show you how to count tokens
00:21:33
programmatically so we can estimate cost
00:21:35
and stay within limits before sending a
00:21:38
request.
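The cost arithmetic above can be sketched in a few lines. The prices are the per-million-output-token figures quoted in this lesson at the time of recording; real prices change, so always check the provider's pricing page:

```javascript
// Dollars per 1 million output tokens (illustrative, from the lesson).
const PRICE_PER_MILLION = {
  "gpt-4o-mini": 0.6,
  "gpt-4.1": 8.0,
};

// Estimate the dollar cost of generating `outputTokens` tokens.
function estimateCost(model, outputTokens) {
  return (outputTokens / 1_000_000) * PRICE_PER_MILLION[model];
}

// Generating 250k output tokens:
console.log(estimateCost("gpt-4o-mini", 250_000)); // 0.15
console.log(estimateCost("gpt-4.1", 250_000)); // 2
```

The same function makes the 13x gap concrete: at 1 million output tokens, the cheaper model costs $0.60 versus $8 for the larger one.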
00:21:41
[Music]
00:21:46
All right. Now, let me show you how to
00:21:47
count tokens in code. So, open up a
00:21:50
terminal window. Let's go to our desktop
00:21:53
or somewhere on your machine and create
00:21:55
a directory called playground. This is
00:21:58
going to be our playground project.
00:22:00
Next, we cd into this directory and run
00:22:03
npm init -y
00:22:06
to create a package.json file without
00:22:08
answering basic questions about our
00:22:10
project like its name, version, and so
00:22:12
on.
00:22:14
So, here's our package.json file. Great.
00:22:17
Now we're going to install a library
00:22:19
called tiktoken. This is the tokenizer
00:22:22
used by OpenAI models. So different AI
00:22:25
models, different platforms have their
00:22:27
own tokenizer library. Okay,
00:22:30
good. Now to open this in VS Code, we
00:22:33
run the command code . Now if this
00:22:36
doesn't work on your machine, just drag
00:22:38
and drop this directory onto VS Code.
00:22:41
All right. Now we add a new file here,
00:22:44
index.js. On the top, we import a
00:22:48
function called getEncoding from
00:22:51
tiktoken. Next we call getEncoding and
00:22:55
give it an argument. The argument is an
00:22:58
encoding. Here we have a few options.
00:23:00
These options or these encodings are
00:23:02
dictionaries that map token IDs to the
00:23:06
actual tokens. For example, we might
00:23:08
have a token ID let's say 9004 that maps
00:23:12
to the word hello. Okay. So we're going
00:23:15
to pick cl100k_base, which is
00:23:18
short for chat language, and 100K means
00:23:22
in this dictionary we have about 100,000
00:23:25
unique tokens. So we get an encoding and
00:23:28
store it in a constant. Next we call
00:23:32
encoding.encode and give it a piece of
00:23:36
text like "Hello world! This is the first
00:23:39
test of tiktoken library." Okay, next
00:23:45
we get our tokens and store them in a
00:23:48
constant.
00:23:50
And finally, we log them on the console.
00:23:54
Okay, now let's open up a terminal
00:23:56
window here by pressing control and
00:23:58
backtick. Now let's run our application
00:24:00
by running node index.js.
00:24:04
All right, we got a syntax error saying
00:24:06
cannot use import statement outside a
00:24:09
module. Why are we getting this? Because
00:24:11
by default node interprets our
00:24:14
JavaScript files as CommonJS modules.
00:24:17
In CommonJS modules, we have a different
00:24:19
format for importing functions. So
00:24:22
instead of this syntax which is called
00:24:24
ES module syntax we have a different
00:24:27
syntax, which is called CommonJS syntax.
00:24:31
So we have to write code like const
00:24:34
{ getEncoding }
00:24:35
equals require
00:24:38
("tiktoken"). That is the CommonJS
00:24:40
format, which is older; nobody uses it
00:24:42
anymore. So, to tell Node.js to use the new
00:24:45
format, we have to go to our package.json
00:24:48
file. We can open it right here:
00:24:51
package.json. We set the type
00:24:55
property to module.
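After this change, package.json would look something like the following (the name and version will match whatever npm init generated, and the dependency version shown is illustrative):

```json
{
  "name": "playground",
  "version": "1.0.0",
  "type": "module",
  "dependencies": {
    "tiktoken": "^1.0.0"
  }
}
```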
00:24:59
Okay, now back to the terminal. Let's
00:25:02
rerun index.js. All right, look. We got
00:25:05
an array of 13 items where each item is
00:25:08
a number. Each number is a token ID that
00:25:11
maps to an actual token. So when working
00:25:14
with large amounts of text, we can use
00:25:16
the tiktoken library to count tokens
00:25:19
before sending a prompt to a language
00:25:21
model.
00:25:23
Hey, just a quick reminder, as I told
00:25:25
you at the beginning of this video, what
00:25:27
you're watching right now is actually
00:25:29
the first 2 hours of my full 7-hour
00:25:31
course on building AI powered apps. So
00:25:34
once you finish this tutorial, if you
00:25:36
want to learn more, check out the link
00:25:38
in the description to enroll in the
00:25:40
complete course. I would love to see you
00:25:41
inside.
00:25:43
[Music]
00:25:48
These days there are hundreds of AI
00:25:50
models out there and new ones are
00:25:52
released almost every week. So which
00:25:55
model should we choose? There is no
00:25:56
single right answer. The model we choose
00:25:59
really depends on our application and
00:26:01
its requirements. In this lesson, I'm
00:26:03
going to give you a framework to make
00:26:04
that decision. Now, I'm not going to
00:26:06
give you a fixed list of model names to
00:26:09
choose because model names change
00:26:10
quickly. So instead, we're going to
00:26:12
focus on the criteria that matter when
00:26:14
choosing a model. The first question we
00:26:17
need to ask is how smart does the model
00:26:19
have to be? If we want to solve complex
00:26:22
problems, we want a model with stronger
00:26:24
reasoning. But if we just need to
00:26:26
extract text, classify a review, or
00:26:28
summarize a short paragraph, we don't
00:26:30
need the top tier model. A smaller model
00:26:33
is good enough to do the job. The next
00:26:35
question is, how fast do we need a
00:26:37
response? Bigger models are often
00:26:39
slower, especially when generating long
00:26:41
outputs. If you're building a real-time
00:26:44
user experience, like autocomplete,
00:26:46
quick summaries, or short form answers,
00:26:49
we'll want a faster model. The next
00:26:51
question is, what kinds of input and
00:26:53
output are we going to send to and
00:26:55
receive from a model? Text is the most
00:26:57
common, but newer models can also
00:26:59
process images, audio, video, or even a
00:27:03
combination of those. These are called
00:27:05
large multimodal models or LMMs. So if
00:27:09
our application involves anything beyond
00:27:10
text, for example, describing what's in
00:27:13
an image, then we need a multimodal
00:27:15
model. The other factor is cost and I
00:27:18
told you that this is based on the
00:27:20
number of tokens. So if you're
00:27:22
processing long documents or generating
00:27:24
a lot of content, cost can add up
00:27:26
quickly. The next factor is the context
00:27:28
window. We talked about this before.
00:27:30
That's how much text the model can
00:27:32
process at once. And that includes our
00:27:35
input, the model's response, and the
00:27:37
chat history. If you're building a
00:27:39
conversational experience, if you're
00:27:41
summarizing long documents, analyzing
00:27:44
code bases, or having long back and
00:27:46
forth conversations, we need a model
00:27:48
with a large context window. The other
00:27:51
factor is privacy. If our application is
00:27:53
processing sensitive data like patients'
00:27:56
medical records, we probably don't want
00:27:58
to send that data to OpenAI servers.
00:28:00
This is where open-source self-hosted
00:28:03
models can help. We'll talk about that
00:28:05
later in the course. Now, to see this in
00:28:07
action on Google, search for OpenAI
00:28:10
models. On this page, we can see the
00:28:13
featured models. At the time of this
00:28:15
video, we have GPT-4.1,
00:28:17
o4-mini, and o3. Now, up here, we can
00:28:21
click compare models and compare these
00:28:24
three models or any other models you're
00:28:26
interested in. Now look, o3 is their
00:28:30
most powerful reasoning model. You can
00:28:32
see that indicated down here. But this
00:28:35
is also the slowest of all three models.
00:28:38
So there is no best single model. If
00:28:40
you're building an application that
00:28:42
requires solving complex problems, then
00:28:44
we need a model with strong reasoning
00:28:47
capability. But otherwise, if all we
00:28:49
want to do is let's say summarize
00:28:51
documents or extract structured data, we
00:28:54
can go with a simpler model. Now all
00:28:56
three of these models, as you can see, are
00:28:58
multimodal models because they support
00:29:01
text and image as the input. But for the
00:29:04
output they can only generate text. So
00:29:06
if we need to generate images in our
00:29:08
application then we have to pick a
00:29:10
different model. At the time of this
00:29:11
video we have a model called
00:29:15
GPT Image 1. Now this model can
00:29:19
support text and image as the input but
00:29:22
it can only generate images in the
00:29:24
output. So it cannot generate any text.
00:29:26
Now that aside down below you can see
00:29:28
the pricing. We have different prices
00:29:30
for input and output. So if we need to
00:29:34
process large documents like contracts,
00:29:36
cost can add up pretty quickly. So
00:29:38
always compare the cost of models and go
00:29:41
with the one that fits your application.
00:29:43
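Since pricing is per token, a quick back-of-the-envelope estimate helps when comparing models. Here's a minimal sketch — the prices are made-up placeholders, so always check the provider's current pricing page:

```javascript
// Rough cost estimate: providers typically price input and output
// tokens separately, per 1 million tokens.
// NOTE: these prices are illustrative placeholders, not real rates.
function estimateCostUSD(inputTokens, outputTokens, pricing) {
  const inputCost = (inputTokens / 1_000_000) * pricing.inputPerMillion;
  const outputCost = (outputTokens / 1_000_000) * pricing.outputPerMillion;
  return inputCost + outputCost;
}

// e.g. summarizing a 50,000-token contract into a 1,000-token summary
const cost = estimateCostUSD(50_000, 1_000, {
  inputPerMillion: 2.0, // hypothetical $ per 1M input tokens
  outputPerMillion: 8.0, // hypothetical $ per 1M output tokens
});

console.log(cost.toFixed(3)); // prints "0.108"
```

Run the numbers with your own expected traffic — output tokens are usually priced several times higher than input tokens, which matters a lot for generation-heavy apps.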
Now moving on below that we have the
00:29:45
context window of these models. So this
00:29:48
model has the context window of 200,000
00:29:50
tokens. This other model has a context
00:29:53
window of about 1 million tokens. Now,
00:29:56
we also have another factor here that's
00:29:58
max output tokens. That's the number of
00:30:01
tokens that can be in one response. So,
00:30:04
this model, even though it has a larger
00:30:07
context window, it can process more text
00:30:09
or longer conversations, but each
00:30:12
response can be a maximum of 32,000
00:30:15
tokens. In contrast, with this model,
00:30:18
our responses can be three times larger.
00:30:20
The other factor we have here is the
00:30:22
knowledge cutoff and that's when the
00:30:24
training of these models stopped. So
00:30:26
sometimes you can find older models that
00:30:28
don't have up-to-date knowledge of the
00:30:30
world but they could be perfect for your
00:30:32
application. So once again there is no
00:30:35
best model. All right, we're done with
00:30:36
this lesson. In the next lesson we're
00:30:39
going to talk about model settings and
00:30:40
see how different parameters affect the
00:30:43
output.
00:30:45
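The selection criteria from this lesson can be condensed into a simple checklist. Here's a tiny illustrative helper — the returned tier labels are invented, not real model names; map them to whatever current models fit your app:

```javascript
// Encode the lesson's criteria as an ordered checklist:
// privacy first, then modality, reasoning strength, and speed.
function suggestModelTier(req) {
  if (req.sensitiveData) return "self-hosted open-source model";
  if (req.needsImagesOrAudio) return "multimodal model";
  if (req.complexReasoning) return "large reasoning model";
  if (req.realTime) return "small, fast model";
  return "mid-sized general model";
}

console.log(suggestModelTier({ sensitiveData: true }));
// -> "self-hosted open-source model"
console.log(suggestModelTier({ realTime: true }));
// -> "small, fast model"
```

The ordering is a judgment call — for instance, privacy comes first here because no amount of capability justifies sending patient records to a third-party server.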
[Music]
00:30:51
In this lesson, we'll take a look at the
00:30:53
settings that control how a language
00:30:54
model behaves. So, here on the OpenAI
00:30:57
website, on the models page, let's grab
00:30:59
GPT 4.1. On this page, you can see all
00:31:02
the details about this model like its
00:31:05
level of intelligence, the speed, the
00:31:07
price of input and output tokens, as
00:31:10
well as the modalities. We can also see
00:31:12
the context window, max output tokens
00:31:14
and the knowledge cutoff date. Now up
00:31:17
here we can try this in the playground.
00:31:19
First we have to log in.
00:31:25
All right. Here's the model playground.
00:31:27
Now the first time you want to use this,
00:31:29
you have to add credit to your account.
00:31:31
So up here, click the settings icon.
00:31:34
Then go to the billing page. On this
00:31:37
page, you can add your debit or credit
00:31:39
card and add some credit to your
00:31:41
account. Once you do that, now let's go
00:31:43
back to the playground. So, the model is
00:31:46
GPT-4.1. We can change this to any other
00:31:48
model. Now, all these models have a few
00:31:51
common settings we're going to talk
00:31:52
about in this lesson. The first one is
00:31:54
text format. Here we have three options.
00:31:57
We have text, which is plain text, as
00:31:59
well as JSON object and JSON schema.
00:32:02
Let's see the differences in action. So
00:32:04
I'm going to go with text and send a
00:32:06
prompt like give me three benefits of
00:32:10
exercising.
00:32:12
So this gives us a short answer in plain
00:32:15
text. We have all seen this before. Now
00:32:17
let's see what happens if we change this
00:32:19
to
00:32:21
JSON object.
00:32:22
Now we repeat the prompt and say give me
00:32:26
three benefits of exercising but we also
00:32:29
add as a JSON object.
00:32:34
Now take a look. The model generated a
00:32:37
JSON object with this format. So we have
00:32:39
an object with a single property called
00:32:41
benefits of exercising which is an array
00:32:43
of strings. Now if the response is not
00:32:46
colorcoded on your machine, just click
00:32:48
this icon. With this you can toggle
00:32:50
between plain text and JSON format.
00:32:53
Okay. So JSON is useful if you want to
00:32:55
parse the response in our application.
00:32:57
But what if we expect a different type
00:32:59
of JSON object? So instead of having a
00:33:01
property like benefits of exercising,
00:33:04
let's say we want to have a property
00:33:05
like exercise which would be an object
00:33:08
with another property called benefits.
00:33:10
This is where we can use a JSON schema.
00:33:14
So we change the format to JSON schema.
00:33:17
In this box, we should define the shape
00:33:20
of our JSON objects. Now this format is
00:33:23
a little bit complicated. So it's easier
00:33:24
to generate it using AI. Here we can
00:33:27
describe the kind of JSON object we
00:33:29
expect in the response. For example, we
00:33:32
can say generate a JSON object like the
00:33:35
following.
00:33:37
So we want to have an object. In this
00:33:39
object, we want to have a property
00:33:41
called exercise
00:33:43
which would be an object itself.
00:33:46
Now inside this object, we want to have
00:33:48
a property called benefits which would
00:33:51
be an array. So we give it an example
00:33:53
and then have it create this schema for
00:33:55
us.
00:33:58
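The generated schema will look something like this — the exact names and descriptions are illustrative, and yours may differ slightly:

```json
{
  "name": "exercise_schema",
  "schema": {
    "type": "object",
    "properties": {
      "exercise": {
        "type": "object",
        "properties": {
          "benefits": {
            "type": "array",
            "description": "A list of benefits associated with the exercise.",
            "items": {
              "type": "string",
              "description": "A single benefit of the exercise."
            }
          }
        },
        "required": ["benefits"],
        "additionalProperties": false
      }
    },
    "required": ["exercise"],
    "additionalProperties": false
  }
}
```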
All right, look. So here's the name of
00:33:59
the schema, exercise schema. And here we
00:34:02
have more details about this schema. So
00:34:05
this is an object with these properties.
00:34:08
In this object, we have a property
00:34:11
called exercise, which is an object
00:34:13
itself with these properties. In this
00:34:16
object, we have a property called
00:34:17
benefits, which is an array. Here's a
00:34:20
description saying a list of benefits
00:34:23
associated with the exercise. Next, we
00:34:25
have information about the types of
00:34:27
items in this object. So, each element
00:34:29
or each item is a string. And here's a
00:34:31
description of each element. Next, we
00:34:34
have the validation properties like
00:34:37
required. So, the benefits property is
00:34:39
required. And no additional properties
00:34:42
are allowed. Of course, we can always
00:34:43
change this to fit our application. So,
00:34:46
let's go with the schema.
00:34:48
Save.
00:34:50
Now we repeat the last prompt and say
00:34:53
give me three benefits of exercising as
00:34:57
a JSON object.
00:35:01
All right, take a look. This time the
00:35:03
model generated a different kind of JSON
00:35:05
object. So we have this object with a
00:35:07
property called exercise which is an
00:35:09
object with a property called benefits
00:35:11
which is an array of strings. So that's
00:35:14
text format. Now let's go back to plain
00:35:17
text. Then we have temperature. With
00:35:20
this we can control how random or
00:35:22
creative the response can be. This is a
00:35:25
value between 0 and 2. But we never set
00:35:28
it to extreme values. Here's a
00:35:30
guideline. We use a low temperature like
00:35:33
0.2 to 0.4 for logical, precise answers like
00:35:36
summarization, answering factual
00:35:38
questions and so on. We use higher
00:35:40
temperature, like a value between 0.7
00:35:43
and 1.0, for creative, expressive tasks
00:35:45
like brainstorming, writing marketing
00:35:48
copy, and so on. So it's best to stick to
00:35:50
this range. Don't go with extreme values
00:35:52
like zero or two because the model can
00:35:54
go crazy. Let me show you an example. So
00:35:56
here we have set the temperature to two.
00:35:59
Now let's say write a story about a
00:36:03
robot. Look what happens.
00:36:07
The model starts generating a story
00:36:09
about a robot. But as we progress, look,
00:36:12
it's generating gibberish. There is
00:36:14
nonsense coming out because the model is
00:36:16
getting extremely creative. This is the
00:36:19
problem with really high temperatures.
00:36:21
So, a good rule of thumb is to set it to
00:36:24
0.7. This is a good balance between
00:36:26
logical and creative responses. Next, we
00:36:28
have max tokens and this sets the
00:36:31
maximum length of the response. Now, the
00:36:33
value we set here really depends on what
00:36:35
problem we're trying to solve. If you
00:36:36
want to generate something short like a
00:36:38
tweet, we have to go with a lower value.
00:36:40
Otherwise, we may waste our tokens and
00:36:43
pay more than we need. But when using
00:36:44
lower values, keep in mind that the
00:36:46
response can be cut off mid-sentence.
00:36:48
Let me show you. So, I'm going to set
00:36:50
max tokens to 50. Now, once again, we're
00:36:54
going to say write a story about a
00:36:57
robot.
00:37:00
All right. So, it says in the quiet town
00:37:02
of Maplewood, there was a small robot
00:37:04
named Pixel. Pixel was built by a kind
00:37:07
inventor, Mrs. Rivera, who wanted to
00:37:10
help. But look, the sentence is cut off.
00:37:13
So to prevent this, we have to be more
00:37:15
precise with our prompt. This is one of
00:37:17
the prompt engineering techniques. There
00:37:19
are many more techniques we'll cover
00:37:20
later in the course. So here we can say,
00:37:22
write a story about a robot in 50 tokens
00:37:27
or less. Write a complete answer without
00:37:31
cutting off mid-sentence.
00:37:36
All right. Now, with this second story,
00:37:38
look, it's saying, "From that day on,
00:37:41
Orbit became the twin's favorite friend.
00:37:43
Always helping those in need. Robot
00:37:45
heart glowing with happiness."
00:37:47
Beautiful. So, that was Max tokens. We
00:37:50
also have top P. This is another way to
00:37:52
control randomness, but it works a bit
00:37:54
differently than temperature. Let's say
00:37:56
as part of generating the response, the
00:37:58
current token is ant. And here are all
00:38:01
the possible options for the next word.
00:38:03
Now next to each token you can see its
00:38:06
probability based on the training data.
00:38:08
If we set top P to one, we tell the
00:38:10
model to use the full range of
00:38:12
possibilities. So any of these words or
00:38:14
any of these tokens can be used to
00:38:16
generate the response. But if we use a
00:38:18
lower top P, like 0.3, that makes the model
00:38:21
focus only on the most likely next
00:38:23
words. So once again, top P is another way
00:38:26
to control randomness. It works
00:38:28
differently from temperature. In
00:38:30
practice, we usually change one or the
00:38:32
other, but not both. If you're not sure
00:38:34
which one to use, stick with temperature
00:38:36
and set top P to one. So these are all
00:38:39
the common settings. We also have store
00:38:41
logs which is used for logging and
00:38:43
debugging. By default, it's enabled
00:38:45
which means all our prompts will be
00:38:47
logged on OpenAI's servers. To see them,
00:38:50
we go to the dashboard.
00:38:52
Then we go to the logs.
00:38:55
On this page, you can see all the
00:38:57
prompts we have sent to OpenAI. We have
00:38:59
the input, the output, the model, and
00:39:02
the date time. If we click any of these
00:39:04
items, we can see more details. So up
00:39:07
here we can click details. We can see
00:39:09
the exact date and time as well as the ID of
00:39:12
the response. So each response has a
00:39:14
unique identifier and this is useful for
00:39:17
creating the conversation state. So the
00:39:19
model remembers our previous responses.
00:39:21
We'll talk about this later in the
00:39:23
course. We can also see the model that
00:39:24
was used, the number of tokens, the
00:39:26
response format, max output tokens,
00:39:29
temperature, and so on.
00:39:32
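To build intuition for what top P does, here's a rough sketch of the idea behind nucleus sampling: sort the candidate tokens by probability and keep only the most likely ones until their cumulative probability reaches top P. This is a simplification of what real samplers do, and the tokens and probabilities are made up:

```javascript
// Simplified nucleus (top-p) sampling: keep the most likely tokens
// whose cumulative probability reaches topP, discard the rest.
function nucleusCandidates(tokenProbs, topP) {
  const sorted = [...tokenProbs].sort((a, b) => b.p - a.p);
  const kept = [];
  let cumulative = 0;
  for (const t of sorted) {
    kept.push(t);
    cumulative += t.p;
    if (cumulative >= topP) break;
  }
  return kept.map((t) => t.token);
}

// Hypothetical next-token candidates and probabilities
const next = [
  { token: "hill", p: 0.5 },
  { token: "colony", p: 0.3 },
  { token: "farm", p: 0.15 },
  { token: "spaceship", p: 0.05 },
];

console.log(nucleusCandidates(next, 1.0)); // all four tokens stay in play
console.log(nucleusCandidates(next, 0.3)); // only the most likely token remains
```

With top P at 1, the model can pick from the full distribution; with a low top P, unlikely tokens are cut off entirely, which is why the output feels more focused.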
[Music]
00:39:38
All right. Now, let me show you how we
00:39:39
can call models in code. To do that,
00:39:41
first you have to create an API key. So,
00:39:43
on the top, let's go to the settings
00:39:45
page. Then, we go to API keys and create
00:39:49
a new secret key. We give it a name.
00:39:52
This could be the name of our
00:39:53
application like my playground app.
00:39:57
Next, we assign it to a project. We have
00:39:59
the default project, but we can also
00:40:01
create multiple projects and assign
00:40:03
different keys to different projects.
00:40:06
Now, let's create the key.
00:40:08
Good. Let's copy this back to VS Code.
00:40:12
We remove all the code here and declare
00:40:15
a constant called OPENAI_API_KEY and set
00:40:20
it to this key. Now I've got to emphasize
00:40:22
this is just for demonstration. In a
00:40:24
real application, we should never store
00:40:26
API keys in the source code because with
00:40:29
that anyone who has access to your
00:40:30
source code can use your API key and
00:40:33
you'll be the one paying for it. So as a
00:40:35
best practice, we should always store
00:40:37
the keys outside of our source code
00:40:39
using environment variables. We'll talk
00:40:41
about that later when we start building
00:40:43
projects. Next, we should install the
00:40:45
OpenAI library. So we open a terminal
00:40:48
window and run npm install openai.
00:40:53
All right, good. Now this library is
00:40:56
just a wrapper around the OpenAI API. So
00:40:58
it gives us a class with a bunch of
00:41:00
methods. Behind the scenes, it's going
00:41:02
to make HTTP calls to the API
00:41:04
exposed by OpenAI. If you go to the docs
00:41:08
and look at the libraries page, you can
00:41:12
see that they also have official
00:41:15
libraries or SDKs for JavaScript,
00:41:18
Python, .NET, Java, and Go. There are
00:41:21
also a bunch of libraries built and
00:41:22
maintained by the community for other
00:41:24
languages. So back to the code on the
00:41:27
top, first we import the OpenAI class
00:41:30
from the OpenAI module. Next, we create
00:41:33
an instance of this class. So we declare
00:41:35
a constant called client because this is
00:41:38
a client to the OpenAI platform. We set it
00:41:41
to a new instance of OpenAI
00:41:44
and provide an object and here we set
00:41:47
API key to the OpenAI API key. Next we
00:41:52
call client
00:41:54
responses.create
00:41:56
and give it an object. Here we can set
00:41:59
the model to let's say GPT-4.1.
00:42:05
We set the input to our prompt like
00:42:08
write a story about a robot.
00:42:11
We can also set the other settings we
00:42:13
talked about in the previous lesson like
00:42:15
temperature. Let's set it to 0.7.
00:42:19
And we set max output tokens to let's
00:42:22
say 50. Now this method returns a
00:42:27
promise. So we have to await it to get
00:42:29
the response. So let's await this
00:42:34
and get the response.
00:42:37
And finally we log the response on the
00:42:39
console.
00:42:42
Okay. Now back to the terminal. Let's
00:42:44
run node index.js.
00:42:47
Now the terminal freezes because we are
00:42:49
waiting for the response to be
00:42:51
generated. But later we'll enable
00:42:53
streaming so you can see the response as
00:42:55
it's being generated. Okay. Now here we
00:42:57
have a bunch of properties in the
00:42:58
response. The one we use most of the
00:43:00
time is output_text. This is the
00:43:02
response generated by the model. But we
00:43:04
also have a few other useful properties.
00:43:07
We have usage where we can see the
00:43:09
number of input and output tokens. Now
00:43:11
in this case I'm not entirely sure why
00:43:13
input tokens is zero but output tokens
00:43:15
is 50. We also have top P which is set
00:43:18
to one by default. We have temperature.
00:43:21
We have store. So this response is
00:43:23
logged on OpenAI servers. And also up
00:43:26
here we have the unique identifier for
00:43:29
this response. Later when we build a
00:43:31
chatbot, we'll use this to maintain a
00:43:33
conversation state. So the chatbot
00:43:35
remembers the conversation history. Now
00:43:38
back to the code, let's see how we can
00:43:40
enable streaming. To do that, first we
00:43:42
set stream
00:43:44
to true. Now to see this clearly, let's
00:43:47
increase max output tokens and set it to
00:43:50
250. Now when we set stream to true, we
00:43:54
no longer get a response object.
00:43:56
Instead, we get a stream object. So
00:43:58
let's rename this by pressing F2 to
00:44:01
stream.
00:44:03
Now this stream is what we call
00:44:07
an async iterable. What does it mean?
00:44:10
Well, an iterable is an object that we
00:44:12
can iterate over like an array. So here
00:44:15
we use a for loop and say for const
00:44:19
event of stream. So we are iterating
00:44:22
over the stream getting one event at a
00:44:25
time. Now I told you that this stream
00:44:27
object is an async iterable. So to
00:44:31
iterate over it we have to use the await
00:44:34
keyword because these events are
00:44:36
generated at runtime. So we don't get
00:44:38
them all in one hit. We get them as they
00:44:41
are being produced. Okay. So, we iterate
00:44:43
over the stream object and then log the
00:44:47
event on the console. Now, back to the
00:44:50
terminal. Let's rerun our program.
00:44:54
All right. Now, look, we're printing
00:44:55
these event objects as we are receiving
00:44:57
them. Let's scroll off and take a look
00:45:00
at a few of them. So, look, each event
00:45:03
has a type property. In this case, the
00:45:05
type is response.output_text.delta. This is
00:45:09
the event that represents a chunk of
00:45:11
text or a token being generated at
00:45:13
runtime. Each of these events has a
00:45:16
sequence number. So this is 252. Next
00:45:18
one is 253. Here we have a delta
00:45:21
property that is the token that is being
00:45:24
generated. So here's one token. Then we
00:45:26
have another token. So to print the
00:45:29
response in the terminal as it's being
00:45:31
generated, we just have to print the
00:45:33
delta property. So back to the code,
00:45:37
let's print event.delta
00:45:40
and rerun our program.
00:45:44
So these are the tokens that are being
00:45:45
generated. But the problem is at the
00:45:48
beginning and at the end we get a bunch
00:45:50
of undefined messages in the terminal.
00:45:53
The reason for that is because not all
00:45:55
events contain the delta property. The
00:45:57
delta property only exists in events
00:46:00
that contain a chunk of text being
00:46:02
generated. But we have other events that
00:46:04
represent the beginning and the end of
00:46:06
this operation. So the proper way to do
00:46:10
this is by checking if event.delta is
00:46:14
defined then we print it on the
00:46:17
terminal. So let's rerun our program.
00:46:20
Okay, we're getting these tokens.
00:46:24
No more undefined messages. Beautiful.
00:46:27
But we don't want to print each token on
00:46:28
a new line. The reason this happens is
00:46:30
because console.log always adds a
00:46:33
new line character at the end. So to
00:46:34
print these tokens next to each other,
00:46:37
we have to take a different approach. We
00:46:38
have to call process.stdout,
00:46:41
which represents the standard
00:46:43
output or the console window. This
00:46:46
object has a write method. We call this
00:46:48
and pass the delta. Now let's rerun our
00:46:52
program.
00:46:54
Okay, this is the same experience we
00:46:56
have when we use chat GPT. So that
00:46:58
brings us to the end of this section. In
00:47:00
the next section, we'll start setting up
00:47:02
a modern full stack project. So, I will
00:47:04
see you in the next section.
00:47:11
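The event-handling pattern from this section doesn't need a live API call to experiment with — any async iterable works. Here's a self-contained sketch using a mock stream; the event shapes are simplified from what the real SDK sends:

```javascript
// A mock stream standing in for the SDK's async iterable of events.
// Only delta-carrying events contribute to the printed text.
async function* mockStream() {
  yield { type: "response.created" }; // no delta
  yield { type: "response.output_text.delta", delta: "Hello" };
  yield { type: "response.output_text.delta", delta: ", " };
  yield { type: "response.output_text.delta", delta: "world!" };
  yield { type: "response.completed" }; // no delta
}

async function printStream(stream) {
  let output = "";
  for await (const event of stream) {
    if (event.delta) {
      // write() adds no trailing newline, unlike console.log
      process.stdout.write(event.delta);
      output += event.delta;
    }
  }
  return output;
}

const result = printStream(mockStream());
```

Swapping `mockStream()` for the real stream returned by `responses.create({ stream: true, ... })` gives the same consuming code, which makes this loop easy to unit test.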
Welcome back. Before we start building,
00:47:13
we need to set up a clean, modern, full
00:47:16
stack project. One we'll use as the
00:47:18
foundation for everything in this
00:47:19
course. Now, if you have done any
00:47:21
research, you have probably seen tons of
00:47:23
templates and GitHub starters for full
00:47:26
stack applications. And while some of
00:47:28
them are great, I decided not to use any
00:47:30
of them. I didn't want this course to
00:47:32
depend on someone else's setup or
00:47:34
introduce tools we haven't covered.
00:47:36
Also, we will not be using Next.js. In
00:47:38
case you don't know, it's a powerful
00:47:40
full stack framework built on top of
00:47:42
React, but not everybody likes it and it
00:47:44
comes with its own learning curve. I
00:47:46
didn't want this to be a prerequisite
00:47:48
for this course. So, instead, we'll set
00:47:50
everything up from scratch using tools
00:47:52
like Bun, Vite, and Express. This gives
00:47:55
us full control, no hidden magic, and a
00:47:58
setup that's easy to understand and easy
00:48:00
to scale. Along the way, we'll add
00:48:02
Tailwind for styling, set up shadcn/ui
00:48:05
for components, format our code with
00:48:07
prettier, and automate our workflow with
00:48:09
Husky. By the end of this section, we'll
00:48:11
have a solid full stack foundation
00:48:13
that's lightweight, clean, and fully
00:48:16
ours. Now, let's jump in and get
00:48:17
started.
00:48:20
Before
00:48:23
we start creating our project, there is
00:48:25
a tool I want to introduce that we'll
00:48:27
use throughout the course and that's
00:48:28
bun. If you haven't heard of it before,
00:48:31
bun is a modern JavaScript runtime kind
00:48:34
of like Node.js but faster and more
00:48:36
integrated. With Node, we typically rely
00:48:38
on multiple tools. We use npm to install
00:48:41
packages, ts-node to run TypeScript, and
00:48:45
something like nodemon to restart the
00:48:47
server when we make changes. With Bun,
00:48:49
we get all of that in one tool. It's a
00:48:51
runtime, a package manager, a task
00:48:54
runner, and even a TypeScript transpiler
00:48:56
all in one. So, it can run TypeScript
00:48:58
files out of the box. And we don't need
00:49:00
to install a bunch of extra tools just
00:49:02
to get started. Now, if you're more
00:49:04
comfortable using Node, that's totally
00:49:05
fine. Everything I'm going to show you
00:49:07
can be done with Node as well. But if
00:49:09
you follow along with Bun, you will
00:49:11
probably find the experience cleaner and
00:49:13
honestly, a lot more enjoyable. So, head
00:49:15
over to bun.sh. And here on the
00:49:18
homepage, find the installation
00:49:20
instruction for your operating system.
00:49:23
Just copy this command and run it in a
00:49:26
terminal window.
00:49:28
Now follow the instructions in the
00:49:30
terminal. So here on Mac, we have to
00:49:31
execute this command: exec /bin/zsh.
00:49:37
Now to verify that bun is installed
00:49:39
properly, we run bun
00:49:42
--version. So on this machine, I'm running
00:49:44
bun version 1.2.17.
00:49:47
In the next lesson, we'll talk about our
00:49:48
project structure.
00:49:51
[Music]
00:49:57
Now, to create our project structure, we
00:49:59
open a terminal window. We go to
00:50:01
somewhere on our machine. I'm going to
00:50:02
go to my desktop. Next, we create a
00:50:05
directory. Let's call it my app or
00:50:07
whatever you want. We cd into this
00:50:09
directory and run bun init. This is the
00:50:13
same as npm init. So, it creates a
00:50:16
package.json file as well as some
00:50:18
additional files. Let's go ahead. First,
00:50:20
we answer this question to select a
00:50:22
project template. We have blank, react,
00:50:24
and library. Let's select blank. So,
00:50:28
this created a few files. We have a git
00:50:30
ignore file, a rules file for the Cursor
00:50:33
editor, an index file, a TypeScript
00:50:35
configuration file, and a readme file.
00:50:38
It also installed TypeScript. So, now
00:50:41
let's open this with VS Code. So, here's
00:50:44
what we get. We have a directory for the
00:50:46
Cursor editor. I'm not using Cursor, so
00:50:49
it's safe to delete this.
00:50:52
We have our node modules directory, a
00:50:55
git ignore file, a bun.lock file, an
00:50:58
index file, which is just a console.log
00:51:00
statement, our package json file just
00:51:02
like a node project, a readme file, and
00:51:05
a typescript configuration file. Now to
00:51:08
set up a full stack project, we're going
00:51:10
to use something called a workspace,
00:51:12
which is a feature built into bun that
00:51:14
lets us manage multiple sub projects
00:51:16
like a client and a server application
00:51:18
from a single place. It's also available
00:51:20
in node projects. So by convention, we
00:51:23
put our sub projects inside a directory
00:51:25
called packages. So here we add a
00:51:28
directory called packages and then we
00:51:30
add a subdirectory for our client app
00:51:35
and one more for our server app.
00:51:39
Next we go to our package.json file and
00:51:41
declare our workspaces. So here we add a
00:51:44
new property
00:51:46
called workspaces and set it to an array
00:51:49
of strings. Here we type the path to our
00:51:52
sub packages. So we go to the packages
00:51:54
directory and grab the client directory
00:51:57
and one more for the server directory.
00:52:01
Now there's a shorthand syntax here. We
00:52:04
can replace client with an asterisk to
00:52:06
say all directories under packages
00:52:08
should be treated as workspaces. So
00:52:11
let's delete the second entry. This is
00:52:14
good. So our project structure is ready.
00:52:16
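With the asterisk shorthand in place, the root package.json now looks something like this (other fields omitted, and the project name will match whatever you chose):

```json
{
  "name": "my-app",
  "workspaces": ["packages/*"]
}
```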
Over the next few lessons, we'll create
00:52:18
the client and server applications
00:52:20
independently. Now I also want to
00:52:22
initialize a git repository here. So
00:52:25
let's open a terminal window and run git
00:52:28
init. Good. And make our first commit.
00:52:33
Initial commit.
00:52:36
Next we'll create our backend project.
00:52:39
[Music]
00:52:45
Now to create our server application
00:52:47
here in the terminal we go to the
00:52:49
packages/server
00:52:51
directory and run bun init one more time
00:52:54
to create the sub project in this
00:52:56
directory. We select blank.
00:52:59
Now back to our project here in the
00:53:01
server directory. We have the same files
00:53:04
you saw in the previous lesson. We don't
00:53:06
need the cursor directory. So let's get
00:53:08
rid of it and clean up our project.
00:53:11
Now back to the terminal. Next we'll
00:53:13
install express as our web server. In
00:53:16
node projects we run npm install or
00:53:18
npm i. In bun projects we run bun add. So
00:53:23
with node projects we have two different
00:53:25
tools. Node for running our code and npm
00:53:28
for installing dependencies. But in bun
00:53:30
projects all these features are
00:53:31
integrated into bun. So we run bun add
00:53:36
express.
00:53:38
Okay. We should also install express
00:53:40
types for TypeScript as a development
00:53:42
dependency. To do that, we run bun add
00:53:47
-d @types/express.
00:53:51
Good. Now back to our project.
00:53:55
Look here in the server directory, we
00:53:56
have this package.json file. And in this
00:54:00
file, we have express as a dependency
00:54:03
and also @types/express as a development
00:54:06
dependency. So in this project so far we
00:54:09
have two separate package.json files one
00:54:13
in the root directory and one inside the
00:54:16
server directory. Later when we create
00:54:18
the client project we'll also have a
00:54:20
package.json file in our client
00:54:22
directory. Okay. Now what is interesting
00:54:25
about this structure is that in this
00:54:26
setup we don't have different node
00:54:29
modules in our server and client
00:54:31
applications. So we don't have a node
00:54:33
modules directory inside the server
00:54:35
directory. We only have one at the top
00:54:37
level where we have all the client and
00:54:40
server dependencies. So right now we
00:54:42
have express as well as all its
00:54:45
dependencies installed in this
00:54:47
directory. Okay. So we have installed
00:54:49
express. Now let's create a basic web
00:54:52
server. So we go to the server directory
00:54:55
and open index.ts.
00:54:57
Here on the top first we import the
00:55:00
express function from the express
00:55:02
module. We call this function
00:55:06
and get an object which we call app.
00:55:09
Next, we declare a constant called port.
00:55:12
We can initialize it from an environment
00:55:14
variable. To do that, we use
00:55:16
process.env.PORT.
00:55:19
So, if we pass port as an environment
00:55:21
variable, we can pick it up here. This
00:55:23
is useful in production environments.
00:55:25
But otherwise, if this environment
00:55:27
variable is not defined, we can give
00:55:29
this a default value of 3000. Next we
00:55:32
define a route. So here we call app.get
00:55:35
and give it two arguments. A path like a
00:55:38
forward slash which represents the root
00:55:40
of our web server and a function that
00:55:43
gets executed when we receive a request
00:55:46
at this endpoint. This function should
00:55:48
have two arguments: a request and a
00:55:50
response.
00:55:52
Here we use a lambda function and in
00:55:54
this function we just want to send the
00:55:56
hello world message to the client. So we
00:55:58
call response send and pass hello world.
00:56:03
Okay. Now so far we have been writing
00:56:06
plain JavaScript code. There is no
00:56:07
TypeScript here. But we can annotate
00:56:10
these arguments with types. So on the
00:56:12
top we can import the request and
00:56:17
response types from the express module.
00:56:20
And then we can annotate these
00:56:22
parameters with request and response.
00:56:28
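A side note that isn't in the course code: because an Express route handler is just a function of a request and a response, you can exercise it in isolation with a mock response object — no server needed. A small illustrative sketch:

```javascript
// The hello-world route handler, as a standalone function.
const helloHandler = (req, res) => {
  res.send("Hello World!");
};

// A minimal mock response that records what was sent.
const sent = [];
const mockRes = { send: (body) => sent.push(body) };

// Invoke the handler directly, as Express would on a request.
helloHandler({}, mockRes);
console.log(sent[0]); // "Hello World!"
```

This is the same idea test frameworks use to unit-test route handlers without binding to a real port.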
Okay. So we have defined a route and a
00:56:30
route handler. Next we should start our
00:56:33
web server. So we call app.listen and
00:56:36
give it two arguments: a port and a
00:56:39
callback function that gets executed
00:56:41
when the web server is up and running.
00:56:43
So again we use a lambda function. And
00:56:46
here we can say console.log.
00:56:49
Now I'm going to replace single quotes
00:56:51
with backticks so we can use a template
00:56:54
literal. So here we can say server is
00:56:57
running on http localhost port. Now here
00:57:02
we use a template literal. So we add a
00:57:04
dollar sign and curly braces to insert
00:57:07
port dynamically. Okay.
00:57:11
So that's pretty much it. Now to run
00:57:13
this, we go back to the terminal and
00:57:15
here in the server directory, we run bun
00:57:18
run index.ts.
00:57:21
Okay, our server is running. So if you
00:57:23
hold down command on Mac or control on
00:57:25
Windows and click this, it opens our web
00:57:27
browser. So our web server is set up
00:57:29
properly. Beautiful. Now instead of
00:57:32
running this command every time, we can
00:57:34
define a custom command like start
00:57:36
just like we do with node projects. So
00:57:39
let's stop this process by pressing
00:57:41
control and C. Now we should go to our
00:57:43
package.json file. Here in VS Code, we
00:57:46
can hold down command on Mac or control
00:57:48
on Windows and press P to quickly find
00:57:51
files. If we type package.json, you can
00:57:53
see we have two files, one in the root
00:57:56
directory and one in the server
00:57:57
directory. So here in the server
00:58:00
package.json, we define
00:58:02
our custom scripts. So we add the
00:58:05
scripts property. Here's our first
00:58:08
script. We call it start and set it to
00:58:11
bun run index.ts.
00:58:14
We also define a custom script called
00:58:16
dev for running our application in watch
00:58:19
mode. So anytime we make changes to our
00:58:22
files, bun will automatically restart
00:58:24
our web server. We set this to bun run
00:58:28
index.ts just like before, but here we
00:58:30
use the --watch option. Make sure to add
00:58:34
this right after bun and before run.
00:58:37
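The resulting scripts section of packages/server/package.json might look like this (note that --watch goes between bun and run):

```json
{
  "scripts": {
    "start": "bun run index.ts",
    "dev": "bun --watch run index.ts"
  }
}
```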
Okay, now back in the terminal, let's
00:58:40
test our commands. So, first we run bun
00:58:42
start because this is a built-in
00:58:44
command.
00:58:45
Okay, our web server is running again.
00:58:47
Let's verify it. Beautiful. Let's stop
00:58:50
this and try the other command. Now, dev
00:58:52
is a custom command. So, we cannot run
00:58:55
bun dev. Instead, we should run bun
00:58:58
run dev.
00:59:00
Now, bun is watching our files. So if we
00:59:02
go to the index.ts in the server
00:59:05
directory and make a small change, let's
00:59:08
remove the exclamation mark. This should
00:59:10
restart our web server. So if we go back
00:59:13
to the browser and refresh, the
00:59:15
exclamation is gone.
00:59:18
[Music]
00:59:24
Earlier in the course, I told you that
00:59:25
we shouldn't store API keys in the
00:59:27
source code. For example, here we don't
00:59:29
want to declare a constant like API key
00:59:32
and set it to whatever because with this
00:59:35
anyone who has access to our source code
00:59:36
can use this API key and we'll be the
00:59:39
one paying for it. So the right way to
00:59:41
manage API keys is using environment
00:59:43
variables. That's what I'm going to show
00:59:44
you in this lesson. So let's go back to
00:59:47
our terminal window and stop this
00:59:48
process. Now if you're on a Mac or
00:59:50
Linux, you can use the export command.
00:59:53
If you're on Windows, you should use the
00:59:55
set command. With these commands, we can
00:59:57
set an environment variable which is a
00:59:59
variable stored at the operating system
01:00:01
level. So we give it a name like
01:00:03
OPENAI_API_KEY. Now by
01:00:07
convention, we use capital letters for
01:00:10
environment variables. So we set this to
01:00:12
a value like 1 2 3 4. Now back to
01:00:15
index.ts. Let's remove this line from
01:00:18
here. In this route handler, I want to
01:00:20
temporarily return our API key. To do
01:00:23
that, we use the process object. We go
01:00:26
to env and access
01:00:30
OPENAI_API_KEY. Make sure to spell it
01:00:33
properly. Now back to the terminal.
01:00:36
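The commands described above might look like this in the terminal (1234 is the placeholder value from the lesson, not a real key):

```shell
# macOS/Linux: set an environment variable for the current shell session
export OPENAI_API_KEY=1234
# On Windows (cmd), the equivalent is: set OPENAI_API_KEY=1234
echo $OPENAI_API_KEY
```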
Let's restart our application
01:00:39
and go to the homepage. Refresh. This
01:00:42
verifies that we could successfully read
01:00:44
the environment variable in our
01:00:45
application. Beautiful. But there's a
01:00:47
problem with this approach. With this
01:00:49
approach, every time we want to start
01:00:51
our application, first we have to set
01:00:53
our environment variables. And this is
01:00:54
very tedious. So this is where we use a
01:00:56
library called dotenv to streamline this
01:00:59
process. Let me show you how to do that.
01:01:01
First we stop this process. Next here in
01:01:04
the server package we install
01:01:08
dotenv.
01:01:10
Next we go to our project and here in
01:01:12
the server directory we add a new file
01:01:15
called .env.
01:01:18
Now if you look at this .gitignore file
01:01:22
you can see that .env files are by
01:01:25
default excluded from our git
01:01:27
repository. So in the future when we
01:01:29
push this repository to github the
01:01:31
variables that we declare here will not
01:01:33
be exposed to the public. So this is for
01:01:35
our private use here we can set
01:01:38
OPENAI_API_KEY to let's use
01:01:42
a different value like abcd to
01:01:44
differentiate from the former value. Now
01:01:47
to load this in our code, we go to
01:01:50
index.ts
01:01:51
on the top. First we import the dotenv
01:01:55
object from the dotenv module. Then we call
01:01:59
dotenv.config().
01:02:02
This should be the first line in our
01:02:03
module. What this does is it goes in
01:02:06
this file. It reads all the variables we
01:02:09
have declared here and stores them as
01:02:11
environment variables before running our
01:02:13
application. You will see that in a
01:02:15
second. So back to the terminal, let's
01:02:18
restart our application.
01:02:20
All right, look. Dotenv is injecting
01:02:23
environment variables from our .env file.
01:02:26
So back to the browser. Let's refresh,
01:02:29
but the value has not changed. What's
01:02:31
going on? Well, the variable that we
01:02:33
declared earlier in the terminal window
01:02:35
overwrites the variables that we have
01:02:37
declared in our .env file. To fix this
01:02:41
issue, we have to remove the environment
01:02:43
variable we set earlier. So, let's stop
01:02:46
this. If you're on Mac or Linux, use
01:02:49
unset followed by the name of the
01:02:50
environment variable like
01:02:53
OPENAI_API_KEY. If you're on Windows,
01:02:56
use the set command and set the
01:02:58
environment variable to an empty value.
01:03:02
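In a terminal, the cleanup step looks like this:

```shell
# macOS/Linux: remove the variable so the .env value takes effect
unset OPENAI_API_KEY
# On Windows (cmd), the equivalent is: set OPENAI_API_KEY=
```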
Okay, now let's restart our application
01:03:06
and refresh the homepage. Okay, here's
01:03:09
the updated value. Beautiful. So now I'm
01:03:12
going to replace this with my actual API
01:03:14
key. So back to our .env file, I'm going
01:03:17
to replace ABCD with a real API key.
01:03:20
Beautiful. Now there is a problem here.
01:03:23
The problem is that if we commit our
01:03:25
code to a Git repository, someone else
01:03:27
cloning this has no idea what
01:03:29
environment variables they should set.
01:03:32
So to help them here, we duplicate this
01:03:35
file
01:03:36
and rename it to .env.example. Now,
01:03:41
this file is not going to be excluded
01:03:42
from our Git repository so others can
01:03:45
see what variables they should set. In
01:03:47
this file, we keep the variable names,
01:03:49
but we remove the actual values. Now,
01:03:52
the last part, back to index.ts.
01:03:58
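The two files described above sit side by side (the value shown is a placeholder, not a real key):

```ini
# .env (git-ignored; holds the real secret)
OPENAI_API_KEY=abcd

# .env.example (committed; documents the required variables)
OPENAI_API_KEY=
```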
Let's revert this code back and return
01:04:00
hello world to the client. So, we're
01:04:03
done setting up our API key. Now the
01:04:05
final step we make a commit and say
01:04:08
manage OpenAI API key.
01:04:12
Now one more thing before we finish this
01:04:13
lesson. If you make any changes in this
01:04:15
.env file you have to restart the
01:04:17
application. So bun will not detect the
01:04:19
changes here which means you have to go
01:04:21
to the terminal stop this process and
01:04:24
restart the application. So the new
01:04:26
values are injected into the environment
01:04:28
variables. Next we're going to create
01:04:30
our client project.
01:04:34
[Music]
01:04:39
Now to create our front end application,
01:04:41
we're going to use Vite. Vite is a very
01:04:43
popular build tool for front-end
01:04:44
applications. You have probably seen it
01:04:46
before. So here on vite.dev, let's go to
01:04:49
get started. On this page, you can see
01:04:53
the command for creating a new Vite
01:04:55
project. So with npm, we run npm create
01:04:59
vite@latest. We also have support for
01:05:02
bun. Let's copy this command. Now back
01:05:05
to VS Code. Let's open a new terminal
01:05:08
window. So in this application, we're
01:05:10
going to have two terminal windows open.
01:05:12
One for the server, one for the client.
01:05:15
Now we can rename this for clarity. So
01:05:18
I'm going to rename this to server.
01:05:22
And the second terminal window to
01:05:24
client. We can also color code them if
01:05:27
you want. Change color. Let's make the
01:05:30
server green and the client yellow is
01:05:33
fine. Now let's go to the
01:05:34
packages/client
01:05:36
directory. Paste that command but don't
01:05:39
execute it yet because if you do so Vite
01:05:42
will create a subdirectory here which is
01:05:43
not what we want. So here we add a
01:05:46
period which means create the front end
01:05:48
application in the current directory.
01:05:51
Let's go ahead. First we select our
01:05:53
framework which is going to be react.
01:05:55
Next we select a variant. We're going to
01:05:58
go with TypeScript. Okay. Now, back to
01:06:01
our project. Look here in the client
01:06:03
directory. We have a typical React
01:06:05
project created with Vite. There is
01:06:07
nothing magical here. In this directory,
01:06:09
we also have a package.json file. So,
01:06:13
currently we have three package.json
01:06:15
files. One at the root, one for our
01:06:17
server application, and the other for
01:06:18
the client application. Now, back to the
01:06:21
terminal. The next step is to install
01:06:23
dependencies using bun. So we can run
01:06:26
bun install or bun i.
01:06:29
This installs all the dependencies
01:06:31
inside our top level node modules
01:06:35
directory. So once again we're not going
01:06:37
to have different node modules in client
01:06:39
and server applications. All right. Now
01:06:41
that the dependencies are installed, we
01:06:43
can run our application by running bun
01:06:45
run dev. So dev is a custom script that
01:06:48
is defined in our client package.json
01:06:51
file. Let's review that package.json
01:06:54
in the client directory. Look, so
01:06:57
here we have these scripts, dev, build,
01:07:00
lint, and so on. So let's run our client
01:07:02
application. Beautiful. Now let's make
01:07:05
sure it's working properly. All right,
01:07:08
here's our React project. Lovely. So
01:07:10
let's wrap up this lesson by making a
01:07:12
commit. Create the front end.
01:07:18
[Music]
01:07:24
All right. Now, to connect our client
01:07:26
and server applications, we're going to
01:07:28
go to our server application and define
01:07:30
a new endpoint. So, we press command P
01:07:33
on Mac or control P on Windows and go
01:07:36
to index.ts, the one in the server
01:07:38
directory.
01:07:40
Now, we're going to grab this route
01:07:42
handler and duplicate it. Now, let me
01:07:44
show you a cool shortcut on the top
01:07:46
under the selection menu. Look, we have
01:07:49
this command copy line down. The
01:07:51
shortcut on Mac is option shift and
01:07:54
down. So with this line selected, if I
01:07:57
hold down shift, option, and press down,
01:08:00
these few lines get duplicated.
01:08:02
Now I'm going to change the path to /
01:08:04
api/hello.
01:08:07
And instead of returning plain text, I
01:08:09
want to return a JSON object. So let's
01:08:11
pass an object and give it a property
01:08:13
like message and set it to hello world.
01:08:17
Now before going further, let's test
01:08:19
this. So back to the browser here on our
01:08:22
server application. Let's send a request
01:08:24
to /api/hello.
01:08:27
Okay, this is our JSON object.
01:08:28
Beautiful. Now if this is not pretty
01:08:30
formatted on your machine, just install
01:08:33
this Chrome extension JSON formatter.
01:08:36
Now let's move on to the client part. So
01:08:39
we're going to go to app.tsx.
01:08:42
This is the container for our client
01:08:44
application. Now, we're going to delete
01:08:45
all the import statements. We should
01:08:48
also delete all the code inside this
01:08:50
function. But we are not going to
01:08:51
manually select these lines. I'm going
01:08:53
to show you another cool shortcut. So,
01:08:55
we put the cursor on the first line.
01:08:58
Now, on the top under the selection
01:09:00
menu, look at the shortcut for this
01:09:02
command. Expand selection. On Mac, it's
01:09:06
control shift command and right. So,
01:09:09
with the cursor here, I'm going to hold
01:09:11
down shift control and command. Now if I
01:09:14
press the right arrow the selection will
01:09:16
expand. If I press the left arrow the
01:09:19
selection will shrink. Take a look. So I
01:09:21
press the right arrow. Now this word is
01:09:24
selected. I press the right arrow again.
01:09:26
Now the entire line is selected. Let's
01:09:28
keep going. Now the entire body of this
01:09:30
function is selected but not the curly
01:09:32
braces. We can keep going. Now the
01:09:35
braces are selected. If we keep going
01:09:37
the entire function definition is
01:09:39
selected. Obviously that's not what we
01:09:41
want. So I'm still holding shift control
01:09:43
and command with my left hand. Now if we
01:09:46
press the left arrow, we can shrink the
01:09:48
selection and delete the code in this
01:09:51
function. Now here we're going to
01:09:53
declare a state variable. So we use the
01:09:56
state hook, initialize it to an empty
01:09:59
string, and call it message.
01:10:05
Now we're going to write a very basic
01:10:06
React code to make an API call. So we
01:10:09
use the effect hook. Now I know some
01:10:11
people are going to have a heart attack
01:10:12
seeing this but this is just for a quick
01:10:14
demo. There are better ways to make API
01:10:16
calls. We'll look at that later in the
01:10:17
course. So here we use the fetch
01:10:19
function to send a request to /
01:10:21
api/hello.
01:10:24
Then when the promise is resolved we get
01:10:26
the response. We convert it to a JSON
01:10:28
object
01:10:30
and then we get the data and here we set
01:10:33
the message to data message.
01:10:38
Now as the second argument to the effect
01:10:40
hook, we pass an empty array as our
01:10:42
dependencies. So this code is executed
01:10:44
only once. This is just basic React
01:10:46
stuff. You should be familiar with this
01:10:48
concept. And finally, we return a
01:10:50
paragraph where we render our message.
01:10:54
Now if you run our application, this is
01:10:56
going to fail because the API endpoint
01:10:59
doesn't exist in our client application.
01:11:02
So if you go to our client application
01:11:04
and send a request to /api/hello,
01:11:06
obviously it's not going to work. This
01:11:08
is only available in our server
01:11:09
application.
01:11:11
So to solve this issue, we're going to
01:11:13
set up a proxy to automatically forward
01:11:15
all requests starting with /api to our
01:11:19
server application. To do that, we go to
01:11:22
vite.config.ts.
01:11:26
In this object, we add a server
01:11:28
property.
01:11:29
Next, we set proxy to an object. And
01:11:32
here we map requests from /api to http
01:11:38
localhost port 3000. And that means if
01:11:42
you send a request to let's say /
01:11:43
api/hello,
01:11:45
this will be automatically forwarded to
01:11:48
localhost:3000/api/hello.
01:11:53
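The proxy setup described above amounts to a small addition in vite.config.ts (a sketch, following Vite's server.proxy option):

```typescript
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
   plugins: [react()],
   server: {
      // Forward any request starting with /api to the Express server.
      proxy: {
         '/api': 'http://localhost:3000',
      },
   },
});
```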
Okay, so with that in place, let's test
01:11:56
our application. Back to the browser.
01:11:59
Let's go to our client application.
01:12:02
Refresh. There you go. We have the hello
01:12:03
world message, but it's displayed in the
01:12:05
center of the screen, which is not what
01:12:07
we want. So, back to VS Code. Let's go
01:12:10
to index.css
01:12:13
and delete all these styles. We'll come
01:12:16
back and work on styling in the future.
01:12:18
So, back to the browser. All right,
01:12:20
looking good. So, let's wrap up this
01:12:22
lesson by making a commit. And here we
01:12:24
say connect the front end and back end.
01:12:31
[Music]
01:12:37
With our current setup, every time we
01:12:39
want to start this application, we have
01:12:41
to open two terminal windows. One to
01:12:43
start the server using bun run dev and
01:12:46
the other to start the client using the
01:12:49
same command. This is tedious. So in
01:12:51
this lesson, I'm going to show you a
01:12:52
simple way to start both applications
01:12:54
using a single command. First, we stop
01:12:57
the client and the server. Now we open a
01:13:02
new terminal window. Make sure this is
01:13:05
pointing to the root of the project.
01:13:07
Here we install a dependency as a
01:13:10
development dependency called
01:13:12
concurrently. Make sure to spell it
01:13:14
properly. With this library, we can
01:13:17
start multiple applications using a
01:13:19
single command. To do this, we go to
01:13:22
index.ts in the root directory. Now,
01:13:25
currently, we have a console.log
01:13:26
statement. Let's get rid of this.
01:13:28
Instead, we import the concurrently
01:13:31
function from the concurrently module.
01:13:35
We call this function and give it an
01:13:37
array of commands. Each command is for
01:13:40
starting one application. So, here's one
01:13:42
object here. We should set a few
01:13:44
properties. The first one is name. We
01:13:47
set this to server. Next is command. We
01:13:50
set this to bun run dev. This is the
01:13:54
command that we run. We're starting our
01:13:55
server application. Right. Next, we set
01:13:58
cwd or current working directory to
01:14:02
packages/server.
01:14:04
So, we want to run this command from
01:14:07
this directory.
01:14:09
Now optionally we can assign a prefix
01:14:11
color here because with this approach
01:14:13
we're going to have a single terminal
01:14:15
window. So to differentiate between
01:14:17
client and server messages we can assign
01:14:19
them different colors. So for server we
01:14:22
can use cyan.
01:14:24
Now let's duplicate this again. We put
01:14:27
the cursor somewhere here. Hold down
01:14:29
shift control command and press right to
01:14:32
expand selection.
01:14:34
Now the entire object is selected. Now
01:14:36
let's duplicate it with shift option and
01:14:39
down. Good. Now we add a comma here and
01:14:43
make a few changes. So this one is going
01:14:45
to be client. Again we run our client
01:14:48
application using the same command but
01:14:50
from a different directory. Also let's
01:14:53
change the color to green. Good. So with
01:14:56
this setup to start both applications we
01:14:59
should run index.ts from our root
01:15:02
directory. Now to simplify things, we're
01:15:04
going to go to the package.json file in
01:15:07
the root directory and define a custom
01:15:11
script here. So scripts,
01:15:14
we can define a dev script
01:15:17
and set it to bun run index.ts.
01:15:22
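The root index.ts described above ends up looking something like this (a sketch; assumes concurrently is installed at the root and the packages/server and packages/client directories from this project layout):

```typescript
// Start the server and client with one command.
import concurrently from 'concurrently';

concurrently([
   {
      name: 'server',
      command: 'bun run dev',
      cwd: 'packages/server',
      prefixColor: 'cyan',
   },
   {
      name: 'client',
      command: 'bun run dev',
      cwd: 'packages/client',
      prefixColor: 'green',
   },
]);
```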
Now back to the terminal. So we're in
01:15:24
the root directory here. We run bun
01:15:26
run dev.
01:15:28
This started both applications. You can
01:15:30
see server messages are in cyan and
01:15:33
client messages are in green. Let's make
01:15:35
sure our setup is working. So back to
01:15:37
the browser, let's refresh. Beautiful.
01:15:41
So with this, we no longer need the
01:15:43
server and client terminal windows.
01:15:46
Let's clean things up and make a commit
01:15:49
to wrap up this lesson. Run both apps
01:15:52
together.
01:15:55
[Music]
01:16:01
All right, let's talk about styling our
01:16:02
application. To style our application,
01:16:04
we're going to use Tailwind CSS. Now, if
01:16:06
you haven't used Tailwind before, it's a
01:16:08
utility first CSS framework, which means
01:16:11
it gives us a bunch of small descriptive
01:16:13
CSS classes like flex, pt-4, which is
01:16:16
short for padding-top 4, text-center,
01:16:19
and so on. So, we can use these
01:16:21
classes in our markup to style our
01:16:24
elements. With this, all the styling is
01:16:26
here. So, we don't go back and forth
01:16:28
between our markup and a CSS file. Now,
01:16:31
some people love this, others not so
01:16:33
much. But in this course, our focus
01:16:35
isn't on styling, it's on building AI
01:16:37
powered features. So, we'll keep styling
01:16:39
to a minimum. And we'll use Tailwind
01:16:41
because it's a very popular framework.
01:16:43
And if you're not familiar with it, I
01:16:45
highly recommend to learn it because
01:16:46
it's something that comes up in job
01:16:48
descriptions a lot. So, head over to
01:16:50
tailwindcss.com.
01:16:52
Now let's go to the documentation and
01:16:55
follow the installation instructions. So
01:16:58
we are using Vite. Let's see what we have
01:17:00
to do. First we have to create our
01:17:01
project which we have done so far. Next
01:17:04
to install Tailwind we have to install
01:17:06
two libraries. Now we're not going to
01:17:09
use npm. So let's grab these two
01:17:11
libraries.
01:17:13
Back to VS Code. Let's open a new
01:17:16
terminal window and go to
01:17:18
packages/client.
01:17:21
Next, we run bun add and paste these two
01:17:24
libraries.
01:17:26
All right, good. Now, back to the
01:17:28
documentation. The next step is
01:17:30
configuring the Vite plugin. So, we're
01:17:33
going to go to vite.config.ts,
01:17:35
import tailwind CSS on the top, and add
01:17:39
it in the list of plugins. So, let's
01:17:41
grab this line,
01:17:43
copy it, and go to vite.config.ts.
01:17:50
We paste it here and then add it in the
01:17:53
list of plugins. This is a function so
01:17:56
we should call it. Okay. Next,
01:17:59
we should import tailwind CSS in our
01:18:02
root CSS file. So, let's grab this line
01:18:06
and
01:18:07
go to index.css.
01:18:11
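For reference, the Tailwind + Vite wiring is just two pieces — the plugin in vite.config.ts and one import in index.css (a sketch, following the official instructions):

```typescript
// vite.config.ts
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import tailwindcss from '@tailwindcss/vite';

export default defineConfig({
   plugins: [react(), tailwindcss()],
});
```

Then index.css starts with a single line: @import "tailwindcss";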
Paste it here. I believe this is the
01:18:14
last step. So, that's it. Now we can
01:18:17
start our application and start
01:18:18
building. So let's go to app.tsx
01:18:24
and style this message. Here we set
01:18:26
class name. Let's set it to font-bold.
01:18:29
And by the way, I highly recommend to
01:18:31
install Tailwind extension in VS Code to
01:18:34
get auto completion here. So here on the
01:18:36
extensions panel, search for Tailwind
01:18:39
CSS.
01:18:41
Okay, this is the extension I'm using.
01:18:43
Tailwind CSS IntelliSense 10 million
01:18:46
downloads. All right. So, let's try
01:18:46
font-bold. Here's what we get. That looks
01:18:52
good. Now, let's give it some padding.
01:18:53
So, we can add p-4,
01:18:56
which is equivalent to a padding of one
01:18:59
rem. So, this utility class is
01:19:01
shorthand for a padding of one
01:19:04
rem, which is 16 pixels. Now, take a
01:19:07
look. We have some padding around the
01:19:08
text. We can also make the text larger.
01:19:11
Let's try text-3xl.
01:19:16
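The classes applied in this lesson, as JSX (class names follow Tailwind's naming; the message variable is from the earlier demo):

```tsx
{/* p-4 = padding of 1rem; text-3xl = larger text; font-bold = bold weight */}
<p className="font-bold p-4 text-3xl">{message}</p>
```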
Okay, that's better. So, that was the
01:19:18
basics of Tailwind. As we go through the
01:19:20
course, I'm going to show you some
01:19:21
additional features. So, let's wrap up
01:19:23
this lesson by making a commit and say
01:19:25
setup Tailwind CSS.
01:19:30
[Music]
01:19:36
All right, we set up Tailwind. Now,
01:19:37
we're going to set up a UI component
01:19:39
library to speed things up, and that's
01:19:41
shadcn. In case you haven't used it
01:19:44
before, it's a collection of beautifully
01:19:45
designed, accessible, and customizable
01:19:48
components. Here on their homepage at
01:19:50
ui.shadcn.com,
01:19:52
you can see various examples. They have
01:19:54
all these beautiful modern
01:19:57
customizable components that we can
01:19:59
easily add to our projects. So, let's
01:20:01
follow the documentation.
01:20:04
Let's go to the docs. First we select
01:20:06
our framework which is Vite. Now we have
01:20:09
created our project. So let's move on.
01:20:12
Next we should add tailwind because
01:20:13
shadcn is built with Tailwind. We can
01:20:16
move on from this step. We did it in the
01:20:17
previous lesson. We also imported
01:20:19
tailwind in our index.css file. So now
01:20:23
we should modify our typescript
01:20:25
configuration file. Now in the current
01:20:27
version of vit projects we have three
01:20:29
typescript configuration files. We have
01:20:31
to modify two of them. One of them is
01:20:33
tsconfig.json. In this file, we
01:20:37
should add the compiler options. So,
01:20:40
let's select these few lines. Copy. Now,
01:20:44
back to VS Code. Here in the client
01:20:46
directory, look, we have three
01:20:49
TypeScript configuration files. Here's
01:20:51
the base one with base settings that is
01:20:53
shared between the others. We have one
01:20:56
for our React application or front-end
01:20:58
stuff and one for Node, which is used
01:21:00
for tooling. So, first let's go to
01:21:02
tsconfig.json.
01:21:04
and paste compiler options.
01:21:08
Now back to the documentation.
01:21:11
Next, we should modify tsconfig.app.json
01:21:14
which is used by our react app. In this
01:21:17
file, we already have the compiler
01:21:20
options property. So we should only
01:21:22
select base URL and paths. Let's copy
01:21:26
these. Now let's go to
01:21:29
tsconfig.app.json.
01:21:32
So here's the compiler options. We just
01:21:34
paste these two properties on the top.
01:21:38
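The two properties being pasted set up the @ path alias, roughly like this (per the shadcn docs for Vite):

```json
{
  "compilerOptions": {
    "baseUrl": ".",
    "paths": {
      "@/*": ["./src/*"]
    }
  }
}
```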
Back to the documentation. Let's move
01:21:40
on. Next, we should update our Vite
01:21:42
configuration file. First, we have to
01:21:44
install node types. So, let's copy this
01:21:48
line. Back to our terminal. Make sure
01:21:51
you're in the client directory. Paste
01:21:53
it. Okay. Now, back to the docs. In our
01:21:57
Vite config file, first we have to import
01:22:00
path. We should also import tailwind CSS
01:22:03
which we did in the previous lesson. So
01:22:05
let's just grab the first import
01:22:07
statement. Copy it and go to
01:22:10
vite.config.ts.
01:22:13
We paste it on the top. Back to the
01:22:15
docs.
01:22:17
Now while configuring Vite we should add
01:22:19
these two plugins react and tailwind
01:22:21
which we did before. We just have to add
01:22:24
the resolve property. So copy this. And
01:22:29
I like to order these alphabetically. So
01:22:31
after plugins, I add resolve.
01:22:35
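With the resolve property added, vite.config.ts looks roughly like this (a sketch mapping @ to the src directory, as in the shadcn docs):

```typescript
import path from 'path';
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import tailwindcss from '@tailwindcss/vite';

export default defineConfig({
   plugins: [react(), tailwindcss()],
   resolve: {
      alias: {
         // Let imports like '@/components/ui/button' resolve to src/.
         '@': path.resolve(__dirname, './src'),
      },
   },
});
```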
Okay, we're almost there. Now back to
01:22:37
the docs. With this, we can run the
01:22:40
shadcn CLI. With this CLI, we can
01:22:43
easily add components to our project. So
01:22:46
we're going to use bunx, which is like
01:22:48
npx for running packages. Let's copy
01:22:51
this line. Back to the terminal again.
01:22:54
Make sure you're in the client
01:22:55
directory. Paste it.
01:22:58
All right. Shadcn is asking what color
01:23:00
we want to use as our base color. We
01:23:02
have a few options. Neutral, gray, zinc,
01:23:05
stone, and so on. What is the
01:23:07
difference? Well, back to their website.
01:23:09
On the top, let's go to the colors page.
01:23:13
Look, this is an example of neutral
01:23:15
colors. Then we have stone, which has a
01:23:17
warmer tone. We have zinc, which has a
01:23:19
cooler tone, slate, gray, and so on. So,
01:23:23
that is the difference. I'm going to go
01:23:25
with neutral.
01:23:27
Now, this CLI created a file called
01:23:29
components.json, which keeps track of
01:23:32
the components we have installed. Let me
01:23:34
show you components.json.
01:23:37
It's just some internal stuff. We're not
01:23:39
going to modify this. This is used
01:23:40
internally by the CLI. Later, when we
01:23:43
install components, those components
01:23:45
will be listed here. Okay. Now, this CLI
01:23:48
also modified our index.css file. Let's
01:23:51
take a quick look. index CSS.
01:23:54
So in the previous lesson, we only added
01:23:57
Tailwind CSS. Now we have a bunch of
01:23:59
additional stuff for our theme. So all
01:24:02
the base variables for our colors,
01:24:05
padding stuff, they're all defined in
01:24:07
our CSS file and we can always modify
01:24:09
this in the future. Okay, so back to the
01:24:12
documentation. Let's move on to the next
01:24:14
step. Now you're ready to install
01:24:16
components. So if you go to the
01:24:20
components page, you can see all the
01:24:22
components. In this lesson, I'm going to
01:24:24
add a button, you can see a preview up
01:24:27
here. You can see an example in the
01:24:29
code. Now on the same page, you can see
01:24:31
the installation instructions. So we use
01:24:34
bunx again to use the shadcn CLI to add this
01:24:37
button. Let's copy this line and run it
01:24:40
in the terminal.
01:24:44
Great. Now this button component is part
01:24:46
of our project. Take a look.
01:24:49
We have all the source code here. We can
01:24:51
make any changes we want. All these
01:24:53
classes are Tailwind classes. So, we can
01:24:55
customize this button to achieve the
01:24:57
look and feel we're looking for. Okay.
01:25:00
Now, let's see how we can use this. So,
01:25:01
we go to app.tsx.
01:25:04
First, let's wrap this expression in
01:25:07
parenthesis so we can break it down into
01:25:10
multiple lines. Now, below the
01:25:12
paragraph, I want to add a button. So,
01:25:14
we add a button component. This is
01:25:16
defined in components/ui/button.
01:25:20
Let's import it. We give it the text
01:25:23
like click me. Now, because we have
01:25:25
multiple elements, we have to wrap them
01:25:27
inside a root element.
01:25:30
Okay, back to the browser. Here's what
01:25:33
we get. Beautiful. But the button is too
01:25:35
close to the edge of the screen. So
01:25:38
let's convert this to a div
01:25:42
and give it a class of p-4. Now we can
01:25:47
remove p-4 from our text. Now take a
01:25:50
look. Okay, that's better. So let's wrap
01:25:53
up this lesson. We make a commit and say
01:25:55
setup shadcn UI.
01:26:00
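The finished App.tsx from this lesson might read like this (a sketch; assumes the button was added via the shadcn CLI, which generates src/components/ui/button.tsx, and that the @ alias points to src/ per the earlier configuration):

```tsx
import { useEffect, useState } from 'react';
import { Button } from '@/components/ui/button';

function App() {
   const [message, setMessage] = useState('');

   useEffect(() => {
      // Quick demo only; fetch the message from the server endpoint.
      fetch('/api/hello')
         .then((response) => response.json())
         .then((data) => setMessage(data.message));
   }, []);

   return (
      <div className="p-4">
         <p className="font-bold text-3xl">{message}</p>
         <Button>Click me</Button>
      </div>
   );
}

export default App;
```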
[Music]
01:26:06
In this lesson, we're going to set up
01:26:07
Prettier. Prettier is a tool that
01:26:09
automatically formats our code, so we
01:26:12
don't have to think about things like
01:26:13
spacing, indentation, or semicolons.
01:26:16
That means fewer distractions, fewer
01:26:18
style debates, and code that's easier to
01:26:20
read for us or anyone else working on
01:26:22
this project. So, there are a number of
01:26:23
steps you have to follow. Pay close
01:26:25
attention to what I'm doing, even if you
01:26:27
have set up prettier before. First, we
01:26:29
go to the extensions panel and find the
01:26:32
prettier extension. If you haven't
01:26:35
installed it, go ahead and install it.
01:26:37
Next, we're going to define our styling
01:26:40
rules. Now, by default, Prettier comes
01:26:42
with its own rules, but we can override
01:26:44
them by creating a .prettierrc file. So,
01:26:47
here in the root of our project, we add
01:26:49
a file called .prettierrc. Make sure to
01:26:55
spell it properly.
01:26:57
Here we add a JSON object where we
01:26:59
define our formatting rules. Now, there
01:27:01
are a number of settings you can
01:27:02
customize. You can look at the prettier
01:27:04
documentation, but at a minimum, you
01:27:06
want to set single quote to true. This
01:27:10
is my personal preference. I prefer
01:27:12
single quotes to double quotes. And by
01:27:14
that, I'm talking about JavaScript code,
01:27:16
not JSON because in JSON, we cannot have
01:27:20
single quotes like this. Okay? So, we're
01:27:22
going to use single quotes in this
01:27:23
project. Next, we set semi to true. That
01:27:27
will terminate our lines with a
01:27:28
semicolon. Again, my personal
01:27:30
preference. Next, we set trailing comma
01:27:34
to ES5. This setting controls whether or
01:27:37
not Prettier adds a comma at the end of
01:27:39
the last item in things like arrays,
01:27:42
objects, or function arguments when they
01:27:44
are written on multiple lines. Now, if
01:27:46
we set this to ES5, this tells Prettier to
01:27:49
add trailing commas where valid in ES5.
01:27:52
That means objects, arrays, function
01:27:54
arguments, but not in function
01:27:56
definitions or inline arrow functions.
01:27:59
Okay. The next setting we're going to
01:28:01
set is print width. The default is 80.
01:28:04
We keep that.
01:28:06
And also tab width, which is the number
01:28:08
of spaces for indentation. The default
01:28:10
is two. I'm going to change it to three.
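For reference, a .prettierrc with the settings described in this lesson would look something like the following. These values are personal preferences, so adjust them to taste.

```json
{
  "singleQuote": true,
  "semi": true,
  "trailingComma": "es5",
  "printWidth": 80,
  "tabWidth": 3
}
```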
01:28:13
Now, to see this in action, let's go to
01:28:16
app.tsx.
01:28:18
Now, in this file, we're using single
01:28:19
quotes. I'm going to change these single
01:28:22
quotes to double and also remove the
01:28:25
semicolon. Now, we're going to go to the
01:28:27
command palette. We can find it under the
01:28:30
view menu. The shortcut is shift command
01:28:34
and P on Mac. Here search for format
01:28:37
document. Now the first time you execute
01:28:39
this command, VS code will ask you about
01:28:41
the default formatter. Select prettier.
01:28:44
Once you do that, format your code. Now
01:28:47
look, the double quotes are replaced
01:28:49
with single quotes. We have a semicolon
01:28:51
and we have consistent tab width. Now we
01:28:54
can also configure VS Code to
01:28:56
automatically format our code whenever
01:28:58
we save our files. To do that we go to
01:29:00
the settings. So settings the shortcut
01:29:04
is command and comma on Mac.
01:29:08
Here search for format on save.
01:29:11
Make sure it's enabled with this. If you
01:29:14
remove this semicolon but save this file
01:29:17
prettier automatically formats this
01:29:18
code. Now we can also format our code
01:29:20
from the command line. And this is
01:29:22
useful before committing our code to git
01:29:24
and sharing it with others. To do that,
01:29:26
first we have to install Prettier as a
01:29:29
development dependency. So open a
01:29:32
terminal window, a new terminal window
01:29:34
pointing to the root of our project.
01:29:36
Let's run bun add --dev prettier.
01:29:40
We add this to the root of our project
01:29:42
because we don't want this to be
01:29:43
specific to the client or the server
01:29:45
application. Okay, let's install this.
01:29:49
Next, we go to the package.json
01:29:50
file in the root of our project.
01:29:53
In this file, we have a script or a
01:29:56
command for running both applications.
01:29:58
Next, we're going to define a command
01:30:00
for formatting the entire codebase. We
01:30:03
set this to prettier --write
01:30:06
period, which means start from the
01:30:08
current directory. Now, as part of
01:30:10
formatting our files, we don't want to
01:30:12
format third party code stored in the
01:30:15
node modules directory.
01:30:18
So here we're going to add a new file in
01:30:21
the root of our project called
01:30:25
.prettierignore. This is similar to .gitignore.
01:30:28
In this file we can list all the files
01:30:30
or directories that should be ignored by
01:30:32
Prettier. So at a minimum we add node
01:30:34
modules and also it's good to add
01:30:37
bun.lock. This is a lock file used by
01:30:39
bun for installing dependencies. We
01:30:41
should never touch this. With this in
01:30:43
place, now we go back to the terminal in
01:30:45
the root of our project and run bun run
01:30:48
format.
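As a sketch, the two root-level files described in this lesson would look roughly like this. The exact contents of your package.json will differ; only the format script matters here.

```json
{
  "scripts": {
    "format": "prettier --write ."
  }
}
```

And the .prettierignore file:

```
node_modules
bun.lock
```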
01:30:50
All right. So this formatted all the
01:30:52
files in this project. With this we can
01:30:55
wrap up this lesson and say setup
01:30:58
prettier.
01:31:04
[Music]
01:31:08
In the last lesson, I showed you how we
01:31:10
can format our code before committing it
01:31:12
to Git. And I told you that this is a
01:31:14
good practice to follow before
01:31:16
committing our code to Git and sharing
01:31:18
it with others. But there's a problem.
01:31:20
What if we forget to run this command
01:31:22
before we make a commit? This is where
01:31:24
Husky comes in. With Husky, we can
01:31:26
automate our Git workflow. So, we can
01:31:28
run certain commands like formatting our
01:31:31
code or running tests before committing
01:31:33
or pushing our code. To get started, go
01:31:36
to
01:31:38
typicode.github.io/husky.
01:31:40
Now let's go to the get started page.
01:31:43
First we have to install husky as a
01:31:45
development dependency. So let's copy
01:31:47
this command. Back in VS Code, open a
01:31:50
terminal window pointing to the root of
01:31:52
the project. Let's run this command.
01:31:54
Good. Back to the documentation. Next we
01:31:57
have to initialize husky. What this does
01:32:00
is it creates a pre-commit script in the
01:32:03
husky directory. I'll show you that in a
01:32:05
second. So let's run bunx husky init
01:32:09
in the terminal window.
01:32:12
Good. Now back to our project. So here
01:32:16
we have the husky directory. In this
01:32:18
directory we have this pre-commit
01:32:20
script. In this file we can add any
01:32:23
commands that should be executed before
01:32:25
committing our code. By default it tries
01:32:27
to run bun test for running our tests
01:32:30
because this is a good practice to
01:32:31
follow. But in this project we don't
01:32:33
have any tests. So instead we should run
01:32:36
bun run format. But there's a problem
01:32:39
here with this command we'll format our
01:32:41
entire codebase. And that means as our
01:32:44
project gets larger as we add more files
01:32:46
this operation is going to get slower.
01:32:48
But there's a second problem. If we have
01:32:49
worked on a certain feature and modified
01:32:51
only let's say two files this command
01:32:54
will potentially format other files that
01:32:56
are not formatted and we'll put them in
01:32:58
our commit which can be misleading. So
01:33:00
later when we look at our git history,
01:33:02
if you open a commit, we'll see a bunch
01:33:04
of files that are modified just as a
01:33:06
result of formatting. So instead of
01:33:08
formatting the entire codebase, we
01:33:10
should format only the staged files. To
01:33:13
do that, we're going to use a separate
01:33:15
library called lint-staged. With this
01:33:17
library, we can execute tasks on staged
01:33:20
files. So back to the terminal, let's
01:33:23
install lint-staged as a development
01:33:26
dependency: lint-staged.
01:33:31
All right, good. Now, back to our
01:33:33
pre-commit script. We're going to
01:33:34
replace this command with bunx
01:33:37
lint-staged. So, when we're going to
01:33:40
make a commit, we'll run lint staged.
01:33:43
Next, we should tell lint-staged what
01:33:45
task to perform on what files. To do
01:33:47
that, we go in the root of our project
01:33:50
and add a new file
01:33:53
called .lintstagedrc.
01:33:57
Make sure to spell it properly. In this
01:33:59
file, we add a JSON object where keys
01:34:02
are file patterns like we can say any
01:34:04
file with any name but with one of these
01:34:07
extensions. So in braces we add JS, JSX,
01:34:11
TS, TSX and CSS.
01:34:14
Now for the value we specify the command
01:34:17
that should be executed for stage files
01:34:19
that match this pattern. In this case
01:34:21
we're going to run prettier --write. Now
01:34:25
earlier we added a period but we're not
01:34:27
going to use that here because this will
01:34:28
format the entire codebase starting from
01:34:31
the current directory but in this case
01:34:33
we only want to write or format files
01:34:35
that match this pattern. Okay so to
01:34:38
recap: the next time we make a commit,
01:34:40
husky will run the pre-commit script in
01:34:43
this script, we're running lint-staged.
01:34:46
lint-staged will look at this file and
01:34:48
figure out that it should run Prettier
01:34:50
on files that match this pattern. Okay.
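To recap in file form, the hook and config described above would look roughly like this. First, the .husky/pre-commit script:

```
bunx lint-staged
```

And the .lintstagedrc file, mapping a file pattern to the command to run on matching staged files:

```json
{
  "*.{js,jsx,ts,tsx,css}": "prettier --write"
}
```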
01:34:52
Now, to see this in action, let's go to
01:34:56
app.tsx
01:34:58
and make a few changes. First, I'm going
01:35:00
to add an exclamation mark here, but I'm
01:35:03
going to also mess up with formatting
01:35:07
and also remove this semicolon. Now, I'm
01:35:10
not going to save this file because if I
01:35:12
do so, Prettier will automatically
01:35:13
format this file. In my VS Code, I have
01:35:16
set up autosaving. So under file menu we
01:35:19
have autosave enabled which means the
01:35:22
file is saved but it's not automatically
01:35:24
formatted. Formatting only happens when
01:35:26
we explicitly save this file. Okay. Now
01:35:30
let's go and make a commit. So here we
01:35:32
can say setup husky.
01:35:35
Now there is a problem at the time of
01:35:37
this recording. I believe this is a
01:35:38
temporary issue with VS Code. Hopefully
01:35:41
that doesn't happen when you're watching
01:35:42
this video or maybe this is a problem
01:35:44
with my setup. So when I make a commit,
01:35:47
look, we get this error saying bunx
01:35:50
command not found. Here's what's
01:35:52
happening. So VS code is complaining
01:35:54
that the bunx command that we have
01:35:56
referenced in our pre-commit script
01:35:58
cannot be found. Now this doesn't happen
01:36:00
if we make a commit from the terminal
01:36:02
window. So one way to solve this is by
01:36:06
adding all the files here and then
01:36:08
making a commit. Let's say setup husky.
01:36:13
Okay, the commit is done and now we can
01:36:16
verify that our app tsx is formatted. So
01:36:19
let's go to app.tsx.
01:36:21
Look, the file is beautifully formatted
01:36:23
and we have this semicolon here. But
01:36:25
what if this happens on your machine and
01:36:27
you don't want to commit from the
01:36:28
terminal? The solution for that is to
01:36:30
replace bunx with npx. So we go to our
01:36:34
pre-commit script and replace bunx with
01:36:39
npx. I know this is not ideal. It's kind
01:36:41
of like a hack because we decided to use
01:36:43
bun for the entire project. But if you
01:36:45
really like the source control panel in
01:36:47
VS Code and prefer to make commits this
01:36:49
way, you have to replace bunx with npx.
01:36:52
Let's make sure this works. So back to
01:36:54
app.tsx again, let's make a change here.
01:36:58
Remove the semicolon and mess up with
01:37:01
formatting. Let's make a commit. I'm
01:37:04
going to say test husky.
01:37:07
Okay, no problem. But I'm not going to
01:37:09
keep this commit in our history. So back
01:37:12
to the terminal. Let's run git log
01:37:15
--oneline. So here's our commit history.
01:37:18
Now the head pointer is pointing to this
01:37:20
commit test husky. We want to get rid of
01:37:23
it and have the head point to this
01:37:26
previous commit. So the way we do that
01:37:29
is by running git reset --hard. Then we
01:37:33
take the head pointer and go one step
01:37:36
back (HEAD~1). Let's run it. Good. Now let's
01:37:39
verify that everything looks good: git
01:37:41
log --oneline. Now the head pointer is
01:37:45
pointing to this commit. Beautiful. So
01:37:47
we're done with this section. In the
01:37:49
next section, we'll start building our
01:37:50
first project.
01:37:53
[Music]
01:37:55
Chatbots are everywhere now. They're
01:37:57
becoming a core part of modern
01:37:59
applications. So in this section, we're
01:38:01
going to build one together from
01:38:02
scratch. At first glance, it looks
01:38:04
simple. A text box, a send button, and a
01:38:06
list of messages. But behind the scenes,
01:38:09
there's a lot going on. There's subtle
01:38:11
UX details, state management challenges,
01:38:13
and edge cases that are easy to overlook
01:38:15
if you haven't built one before. So,
01:38:17
grab yourself a cup of coffee and let's
01:38:19
get started.
01:38:24
Now, this section is a little bit
01:38:25
longer, so I've broken it down into two
01:38:28
segments. In this segment, we'll start
01:38:30
by building the backend for our chatbot.
01:38:33
First, we'll create a basic API that
01:38:35
receives a message and returns a
01:38:36
response from an AI model. Once that's
01:38:39
working, we'll gradually improve it by
01:38:41
adding input validation, error handling,
01:38:43
and making sure it's robust and clean.
01:38:46
Then, we'll reorganize our code to keep
01:38:48
things modular and easy to maintain. By
01:38:50
the end of this segment, we'll have a
01:38:52
fully functional production ready
01:38:53
backend ready to plug into our front
01:38:55
end. So, let's jump in.
01:38:58
[Music]
01:39:04
In this lesson, we're going to build a
01:39:06
simple API endpoint that receives a
01:39:08
message from the user and returns a
01:39:10
response. To get started, first we open
01:39:12
a new terminal window and go to
01:39:14
packages/server
01:39:17
and install OpenAI. So, bun add openai.
01:39:22
Good. Next, we go to index.ts in our
01:39:25
server application.
01:39:28
on the top. First we import OpenAI
01:39:31
from openai. Once we run config, then we
01:39:35
create a new instance of OpenAI with our
01:39:37
API key. So let's declare a constant
01:39:40
called client and set it to new OpenAI.
01:39:43
Here we set API key to we go to process
01:39:47
environment and grab
01:39:51
OPENAI_API_KEY.
01:39:54
Okay. Next, we define a new endpoint for
01:39:58
receiving prompts from the user. So, we
01:40:01
call app.post.
01:40:04
Now, in this case, we're not going to
01:40:05
use that get method because we're not
01:40:07
just getting information, we're
01:40:08
submitting data to the server. So, we
01:40:10
have to send an HTTP post request to
01:40:12
this endpoint. Now, for the path, let's
01:40:15
go with /api/chat. Next, we add a
01:40:19
request handler. So request
01:40:22
and response.
01:40:26
Now in this function first we should
01:40:28
grab the user's prompt from the request.
01:40:31
So this request object has a body
01:40:33
property. Let's say in the object that
01:40:35
we sent to the server we have a property
01:40:37
like prompt. We get that and store it in
01:40:40
a constant. A cleaner way is to use
01:40:43
destructuring. So instead of accessing
01:40:45
the prompt property, we grab request
01:40:48
body and destructure it to grab the
01:40:51
prompt property. That's cleaner. Next,
01:40:54
we send this to OpenAI. So we call
01:40:56
client.responses.create().
01:41:00
We pass an object. First, we set the
01:41:03
model. Now what model are we going to
01:41:04
use? Back to the OpenAI website. Look
01:41:07
here on the models page. We have
01:41:09
different categories of models. We have
01:41:11
reasoning models which are used for
01:41:13
solving complex multi-step tasks like
01:41:15
coding problems. We have another
01:41:17
category flagship chat models. They're
01:41:20
highly intelligent chat models. Then we
01:41:22
have cost-optimized models. These are
01:41:24
smaller and faster. We also have
01:41:27
research models, realtime models, image
01:41:30
generation models, and so on. For a
01:41:32
chatbot, we can go with one of these
01:41:35
cost-optimized models because we want a
01:41:37
small model that can quickly respond to
01:41:39
the user's queries. We don't need
01:41:40
reasoning. We don't need to solve
01:41:42
complex problems. So for comparison,
01:41:45
let's compare a couple of these in terms
01:41:47
of their performance and price. I'm
01:41:49
going to compare o4-mini with GPT-4o
01:41:53
mini. So on the top, let's compare
01:41:56
models. Here we have o4-mini. Let's also
01:41:59
compare GPT-4o
01:42:03
mini. Now GPT-4o mini has various flavors. We
01:42:06
have mini audio, realtime, and so on.
01:42:08
We're going to grab this base model. Now
01:42:11
compare these two models. So GPT-4o mini is
01:42:16
the fastest of all these models. It's a
01:42:18
multimodal model. So in the input we can
01:42:20
pass text and image. Now compare the
01:42:23
prices for this model. The price of 1
01:42:26
million input tokens is 15 cents. Now
01:42:28
compare that to this other model. It's
01:42:31
10 times more. So it makes more sense to
01:42:34
use this model for our chat application.
01:42:37
Now, what about context window? The
01:42:40
context window of this model is 128,000
01:42:43
compared to this other model. But for a
01:42:45
chatbot, let's say this is going to be a
01:42:47
customer assistant. Usually, we don't
01:42:49
have long conversations. We don't need a
01:42:51
very large context window. Users come
01:42:54
ask a few questions and move on. So, I
01:42:56
believe 128,000 tokens is a good size
01:42:59
for the context window. So, back to the
01:43:01
code. Let's set the model to
01:43:05
gpt-4o-mini.
01:43:07
Next, we set input to the user's prompt. It's
01:43:11
good to set a temperature. Now, in the
01:43:13
chatbot, responses should be accurate
01:43:15
and consistent. We don't need creativity
01:43:17
here. So, we should stick with lower
01:43:20
temperatures, somewhere between 0.2 and 0.4.
01:43:23
I'm going to go with 0.2, but we can
01:43:25
always modify this in the future.
01:43:26
There's no hard and fast rule. This is
01:43:28
more an art than science. We have to
01:43:30
test different temperatures to see what
01:43:32
kind of responses we like more. Now,
01:43:34
it's also good to set max output tokens,
01:43:37
otherwise the responses are going to be
01:43:39
long. But in a chat application for a
01:43:41
chatbot, we should have relatively short
01:43:43
responses. So, I'm going to set this to
01:43:46
100 tokens. And again, we can always
01:43:48
come back and adjust this. So, we call
01:43:51
this method. We await the call and get
01:43:54
the response. Because we are using
01:43:57
await, we have to mark this function as
01:44:00
async. So, we have the response. The
01:44:03
final step is to return a JSON object to
01:44:05
the client. So we call response.json.
01:44:09
We pass an object. And here we add a
01:44:12
property like message. We set it to
01:44:13
response.output_text. We're almost done.
01:44:18
There's just one step missing. Look in
01:44:20
this function. We're extracting the
01:44:22
prompt from the request body. By
01:44:24
default, this is not going to work
01:44:26
unless we tell Express to automatically
01:44:28
parse JSON object from the request body.
01:44:30
The way we do that is by adding a
01:44:32
middleware function. So on the top once
01:44:35
we create an app we call app.use().
01:44:41
Here we call express.json.
01:44:44
This returns a middleware function that
01:44:46
gets executed before passing that
01:44:48
request to our request handler. So in an
01:44:51
express application we can have one or
01:44:53
more middleware functions. These
01:44:55
middleware functions can be used for
01:44:56
parsing request data, for enforcing
01:44:59
security rules, for logging, and so on. So
01:45:01
once we install the JSON middleware,
01:45:04
we'll be able to access request.body
01:45:06
otherwise this is going to be undefined.
01:45:09
In other words, the JSON middleware gets
01:45:11
executed before our request handler. It
01:45:14
parses the JSON object in the request
01:45:15
body and stores it in request.body.
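Putting the whole lesson together, the endpoint built here would look roughly like the following. This is a sketch, not the exact course code; it assumes the express and openai packages are installed and that OPENAI_API_KEY is set in the environment.

```typescript
import express from 'express';
import OpenAI from 'openai';

// Create the OpenAI client with our API key from the environment.
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const app = express();
// JSON middleware: parses the request body so req.body is populated.
app.use(express.json());

app.post('/api/chat', async (req, res) => {
   // Grab the user's prompt from the request body via destructuring.
   const { prompt } = req.body;

   const response = await client.responses.create({
      model: 'gpt-4o-mini',
      input: prompt,
      temperature: 0.2,        // low temperature: accurate, consistent answers
      max_output_tokens: 100,  // keep chatbot replies relatively short
   });

   // Return the model's text output to the client.
   res.json({ message: response.output_text });
});

app.listen(3000);
```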
01:45:19
We're done with our first step. Next,
01:45:21
I'm going to show you how to test this
01:45:22
endpoint. All
01:45:24
[Music]
01:45:30
right. Now to test our API endpoint,
01:45:32
let's go to the extensions panel and
01:45:34
search for Postman.
01:45:37
This is a very useful extension for
01:45:39
testing API endpoints. It's also
01:45:41
available as a standalone application.
01:45:43
In the past, I used to use the
01:45:45
application, but recently I've been
01:45:47
using the extension more. It's kind of
01:45:48
more convenient. So, let's go ahead and
01:45:51
install this. Once you do this, go to
01:45:54
the command palette. You can find it
01:45:56
under the view menu. The shortcut is
01:45:58
shift command and P on Mac and probably
01:46:01
shift control P on Windows. Here, search
01:46:04
for show Postman.
01:46:07
You get this panel. The first time you
01:46:09
have to create an account and sign in. I
01:46:11
know it's a pain in the neck, but trust
01:46:12
me, it's completely worth it. It only
01:46:14
takes a minute. With this, we can save
01:46:16
your HTTP requests in your account and
01:46:18
share them across your different
01:46:20
machines, so you don't have to recreate
01:46:22
them every time. But you can also share
01:46:23
your requests with other members in your
01:46:25
team. It's very convenient. We're going
01:46:28
to create a new HTTP request. We're
01:46:30
going to send a POST request to
01:46:35
http://localhost:3000/api/chat. Next, we
01:46:40
go in the body tab, select raw for the
01:46:43
type of data we want to send. And from
01:46:46
this drop-down list, we select JSON.
01:46:49
Now, here we add a JSON object to send
01:46:51
to the server. So, we add a JSON object.
01:46:54
We give it a property called prompt and
01:46:56
set it to let's say what is the capital
01:46:59
of France. Let's send this request.
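For reference, the raw JSON body sent in this request is simply:

```json
{
  "prompt": "What is the capital of France?"
}
```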
01:47:04
All right, we got a response with a
01:47:05
status of 200. Now, let me put this side
01:47:08
by side so you can see clearly. We can
01:47:10
toggle the view mode. So, here's our
01:47:13
request and here's our response. We have
01:47:14
a message saying the capital of France
01:47:16
is Paris. Beautiful. So, our API is
01:47:19
working. Let's move on to the next
01:47:20
lesson.
01:47:22
[Music]
01:47:28
Right now, our chatbot doesn't have a
01:47:30
memory. So, if you ask a follow-up
01:47:32
question, it doesn't remember our
01:47:33
previous questions. Let me show you. So
01:47:36
here I'm going to change the prompt to
01:47:39
what was my previous question.
01:47:42
Let's see what it says.
01:47:45
It says I can't access previous
01:47:47
interactions or questions. So how can we
01:47:49
solve this? Well, one very basic way to
01:47:51
solve this is by declaring a global
01:47:53
variable for keeping track of the last
01:47:56
response ID. This is a temporary
01:47:58
solution. We're going to do things step
01:47:59
by step. So outside of our route
01:48:03
handler, we declare a global variable
01:48:06
like last response ID. Now in terms of
01:48:10
the type, this can be either a string if
01:48:13
we have a valid response ID or null. The
01:48:17
first time we're going to initialize
01:48:18
this to null. Now every time we get a
01:48:21
response from OpenAI, we update
01:48:24
last response ID, we set it to response
01:48:28
ID. Now when calling the create method
01:48:32
in this options object, we can pass
01:48:35
previous response ID to establish a
01:48:39
conversation history. So let's set this
01:48:42
to last response ID and test our API
01:48:44
again. Back to Postman. Let's start by
01:48:48
asking what is the capital of France.
01:48:54
It says the capital of France is Paris.
01:48:56
Great. Now let's ask what was my
01:49:00
previous question.
01:49:04
It says your previous question was about
01:49:06
the capital of France. Great. So we have
01:49:08
built memory into this chatbot. But
01:49:10
there is a problem. Back to our code.
01:49:13
With this global variable, we can only
01:49:16
keep track of the last response ID for
01:49:18
one conversation. But in a real
01:49:20
application, we can have multiple users
01:49:23
and each user can have multiple
01:49:25
conversations. So the right way to
01:49:27
address this is by using a map or a
01:49:29
dictionary. So instead of one global
01:49:32
variable, we declare a map. Let's call
01:49:35
it conversations.
01:49:37
We set it to a new map that here we
01:49:39
specify the type of keys and values. I'm
01:49:42
going to go with string and string. I
01:49:45
will explain what it means in a second.
01:49:47
So in this dictionary or in this map,
01:49:49
we're going to map conversation ids to
01:49:52
last response ID in that conversation.
01:49:56
For example, we might have a
01:49:57
conversation conversation one and the
01:49:59
last response ID in that conversation
01:50:01
can be 100. Similarly, we can have
01:50:04
another conversation and in that
01:50:06
conversation in that thread, the last
01:50:08
response ID might be 200. So we're going
01:50:11
to replace this single global variable
01:50:14
with a map.
01:50:16
Now in our route handler first we should
01:50:20
get the conversation ID from the body of
01:50:22
the request. So
01:50:25
let's grab it from this object. Let's
01:50:28
call it conversation ID. Now this is the
01:50:30
same experience we have in chat GPT. For
01:50:32
example, when we ask a new question,
01:50:35
let's say what is the capital of France?
01:50:38
Look what happens in the URL.
01:50:42
This client application created a GUID
01:50:45
or a globally unique identifier to
01:50:47
represent this conversation. So the
01:50:49
client is sending the conversation ID to
01:50:51
the server. Now back to our code. We
01:50:54
have the conversation ID. Once we get a
01:50:57
response, we should update the last
01:50:59
response ID in that conversation. To do
01:51:02
that we call conversations
01:51:05
set as the key we provide conversation
01:51:08
id and as the value we provide response
01:51:13
do ID
01:51:15
and also when setting the previous
01:51:18
response ID we should get the last
01:51:20
response ID of the current conversation.
01:51:23
So we call conversations that get we
01:51:26
pass conversation ID and get the last
01:51:30
response ID. Let's test this. So back to
01:51:33
Postman, I'm going to start with what is
01:51:37
the capital of France.
01:51:42
It says the capital of France is Paris.
01:51:44
Now I forgot to pass the conversation
01:51:46
ID. So let's add conversation ID here.
01:51:51
Now we can use a GUID or just a simple
01:51:53
string like conv1. Let's start again.
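The request body now carries both properties. Assuming the property is named conversationId in the code (the transcript only says "conversation ID"), it would look like this:

```json
{
  "prompt": "What is the capital of France?",
  "conversationId": "conv1"
}
```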
01:51:57
Okay. Now in the same conversation I'm
01:52:00
going to ask what was my previous
01:52:03
question.
01:52:06
Now it's not updating. This is a glitch
01:52:08
with Postman extension. I don't know if
01:52:10
it happens on your machine or not. But
01:52:12
if you go to the raw tab, you can see
01:52:14
the updated response. Sometimes it
01:52:16
doesn't update on the pretty tab. So if
01:52:18
that happens you can simply close this
01:52:20
window and reopen it. Alternatively you
01:52:23
can use the raw or preview tabs. So here
01:52:25
it says your previous question was about
01:52:27
the capital of France. Great. Now let's
01:52:30
open a new conversation and in that
01:52:32
conversation ask what was my previous
01:52:34
question. Let's see what happens.
01:52:38
Again the pretty tab is not updating. So
01:52:39
let's go to the raw tab. It says I can't
01:52:42
access previous questions or
01:52:43
conversations. Great. Let's ask a
01:52:45
different question here and say what is
01:52:48
after one.
01:52:52
After one, the next number is two. Now
01:52:55
let's repeat. What was my last question?
01:53:01
It says your last question was what is
01:53:03
after one? Now if you ask the same
01:53:06
question but go to a different
01:53:07
conversation,
01:53:10
it says your last question was what was
01:53:13
my previous question? So it's properly
01:53:15
keeping track of the conversation
01:53:16
history. Now back to our code. So using
01:53:19
a map, we can keep track of the last
01:53:23
response in each conversation. Now in
01:53:26
this implementation, we are storing
01:53:27
these values in memory. In a real
01:53:29
application like chat GPT, we should
01:53:31
store these values in the database. But
01:53:33
that's more complicated. We're not going
01:53:35
to do any database work in this project
01:53:37
because we just want to focus on
01:53:38
foundations. Later in the course, we
01:53:40
have another project that involves some
01:53:42
database work.
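The in-memory bookkeeping described in this lesson boils down to a few lines. This standalone sketch (with made-up conversation and response IDs) shows how the map replaces the single global variable:

```typescript
// Maps each conversation ID to the last response ID in that conversation.
const conversations = new Map<string, string>();

// Called after each OpenAI response: remember the last response ID.
function setLastResponseId(conversationId: string, responseId: string): void {
   conversations.set(conversationId, responseId);
}

// Called before each request: look up the previous response ID
// (undefined for a brand-new conversation).
function getLastResponseId(conversationId: string): string | undefined {
   return conversations.get(conversationId);
}

// Two independent conversations keep separate histories.
setLastResponseId('conv1', 'resp_100');
setLastResponseId('conv2', 'resp_200');
console.log(getLastResponseId('conv1')); // 'resp_100'
console.log(getLastResponseId('conv3')); // undefined (new conversation)
```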
01:53:44
[Music]
01:53:50
In the last lesson, we assumed that
01:53:51
everything would go smoothly. But in a
01:53:53
real world application, we can't rely on
01:53:55
that. We need to make sure that the
01:53:57
request body contains valid data. More
01:54:00
specifically, we want to make sure that
01:54:02
prompt is a string between 1 and 1,000
01:54:05
characters. And conversation ID is a
01:54:07
valid GUID or globally unique
01:54:10
identifier, just like the GUID we have
01:54:12
here on chat GPT. So how can we
01:54:15
implement these validation rules? This
01:54:17
is where we use Zot. Zot is a very
01:54:20
popular data validation library used in
01:54:22
React applications. So let's open a
01:54:25
terminal window here in the server
01:54:27
directory. Let's add Zod.
01:54:31
All right, good. Now with Zod, we can
01:54:33
define the shape of our objects like
01:54:35
incoming request data and easily
01:54:37
validate them. So let's go to index.ts.
01:54:42
First on the top we import z from zod.
01:54:48
Now down here
01:54:52
outside of our route handler, let's
01:54:55
declare a constant called chat schema.
01:54:59
We set it to z.object().
01:55:02
And here we pass an object for defining
01:55:04
the shape of our incoming request data.
01:55:06
So in the request we want to have a
01:55:08
property called prompt.
01:55:11
This should be a string. So we set it to
01:55:13
z.string(). Now here we can chain
01:55:16
various methods for defining validation
01:55:18
rules. For example, we can call min to
01:55:21
specify a minimum length. Let's say a
01:55:23
minimum of one character. Now here
01:55:26
optionally we can provide a custom error
01:55:28
message like prompt is required.
01:55:32
Now, we can chain additional methods for
01:55:34
defining additional validation rules.
01:55:36
For example, we can apply a max length
01:55:38
of 1,000 characters. Now, why do we do
01:55:41
this? Because we want to prevent a bad
01:55:43
user from posting a large amount of text
01:55:45
and potentially bringing down our system
01:55:48
or at a minimum, we want to prevent them
01:55:50
from wasting our tokens. So, we should
01:55:52
always apply constraint on the min and
01:55:54
max length of our strings. Again, we
01:55:57
provide a custom error and say prompt is
01:56:00
too long. Max 1,000 characters.
01:56:05
Now, when adding multiple validation
01:56:07
rules, I like to break my code down into
01:56:10
multiple lines. That makes it easier to
01:56:12
see things
01:56:14
like this. That's better. So, that's the
01:56:16
prompt property. Now, in our request, we
01:56:18
also want to have a property called
01:56:20
conversation ID. This should also be a
01:56:23
string. But here we don't want to apply
01:56:25
a min and max length. Instead, we want
01:56:27
to make sure that this is a valid
01:56:30
UUID. That's short for universally unique
01:56:33
identifier. So, UUID or GUID, they're
01:56:36
the same thing. Now that we have a chat
01:56:38
schema, we go to our route handler. The
01:56:41
first thing we're going to do is
01:56:42
validate our incoming request data. So,
01:56:45
here we call chat schema
01:56:49
safeParse and pass request.body.
01:56:53
This returns an object. We store it in a
01:56:56
constant called parse result.
01:57:00
Next, we check if parse result is not
01:57:03
successful,
01:57:06
then we set the status of the response
01:57:08
to 400, which means bad request. This is
01:57:12
the standard error code we use when the
01:57:14
client sends bad data to the server.
01:57:16
Also, in the body of the response, we
01:57:18
want to add a JSON object to provide
01:57:20
error messages to the client. We can get
01:57:22
that object from parse result dot error
01:57:26
dot format.
01:57:29
Okay. Now finally we return. So the rest
01:57:32
of this method is not executed. Now
01:57:34
let's test this. So let's go to the
01:57:36
postman window. Let's see what happens
01:57:38
if we pass an empty string for the
01:57:41
prompt.
01:57:43
All right. Here's what we get. We get an
01:57:45
object with three properties: _errors,
01:57:47
which contains common errors as well as
01:57:50
prompt and conversation ID which contain
01:57:53
specific error messages for these
01:57:55
properties. So for prompt we have an
01:57:57
error saying prompt is required and for
01:57:59
conversation ID we have another error
01:58:01
saying invalid UUID. Now let's see
01:58:04
what happens if we pass a few white
01:58:06
spaces. So with this we have a string
01:58:09
that is at least one character long. But
01:58:11
a string with white spaces is not a
01:58:13
valid prompt. So let's see what happens.
01:58:16
The error for the prompt property is
01:58:18
gone which is not good. So to prevent
01:58:20
this we have to go back to our schema.
01:58:24
And before applying min and max rules
01:58:27
first we trim the string. With this we
01:58:30
get rid of the white spaces at the
01:58:32
beginning or end of our string. Now
01:58:35
let's send this request one more time.
01:58:37
All right the error for the prompt
01:58:39
property is back. Great. So let's pass a
01:58:42
valid prompt like what is the capital of
01:58:45
France.
01:58:46
Now for conversation ID we need to pass
01:58:48
a valid GUID. How do we do that? Well, we
01:58:52
can install an extension for generating
01:58:55
GUIDs. So here, search for GUID or UUID.
01:58:58
There are a lot of different
01:58:59
extensions. I use this one, UUID
01:59:02
generator. Now back to our request
01:59:05
editor. We put the cursor here. Bring up
01:59:08
the command palette and search for
01:59:11
UUID. We have two commands. The first
01:59:13
one is Generate UUID at cursor. Let's
01:59:16
select this command. Now we have this
01:59:19
UUID. Copy to clipboard. Let's replace
01:59:22
our placeholder with this valid UUID. Now we send
01:59:26
this request. Okay, it's gone through
01:59:29
and we got a response from OpenAI.
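As an alternative to an editor extension, the standard library in both Bun and Node can generate a UUID:

```typescript
import { randomUUID } from 'node:crypto';

// Generates a random (version 4) UUID, e.g. for a new conversation ID.
const conversationId = randomUUID();
console.log(conversationId);
```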
01:59:34
[Music]
01:59:39
All right. Now that we have added basic
01:59:40
input validation, the next step is to
01:59:42
handle unexpected errors more
01:59:44
gracefully. In this lesson, we'll update
01:59:46
our API to catch and respond to runtime
01:59:48
errors and return a proper error message
01:59:51
to the client. So look on this line
01:59:53
where we try to get a response from
01:59:55
OpenAI. This line might fail for various
01:59:57
reasons. Maybe the network is down.
02:00:00
Maybe OpenAI servers are down. Perhaps
02:00:02
we run out of tokens. Many different
02:00:04
things can go wrong. Right now, we are
02:00:06
not handling errors. So, to demonstrate
02:00:08
this, I want to add an exclamation mark
02:00:11
here to represent an invalid model. Now,
02:00:14
let's see what happens when we send a
02:00:16
request to our API.
02:00:19
All right, we get this HTML document. If
02:00:22
you preview it, here's what we get.
02:00:24
Error 400. The requested model does not
02:00:27
exist. And down below, we have our full
02:00:30
stack trace. So we can see on which line
02:00:33
this error has occurred. This is not a
02:00:35
good response to return from our API. So
02:00:37
instead we're going to handle this error
02:00:39
and return a proper error message to the
02:00:41
client. To do that we're going to add a
02:00:45
try catch block in our route handler.
02:00:48
So try catch
02:00:51
in the try block we add our happy path.
02:00:54
So we add all the code for getting a
02:00:57
response, updating the conversations map
02:01:00
and returning the response. We get all
02:01:03
this code and put it inside the try
02:01:05
block. Okay. Now because this line for
02:01:09
extracting the prompt and conversation
02:01:12
ID is closely related to the rest of the
02:01:14
code, I want to bring this down and put
02:01:16
it inside the try block as well for
02:01:19
clarity. Now in the catch block, we
02:01:21
handle errors.
02:01:24
In case something goes wrong, first we
02:01:25
want to set the status to 500, which
02:01:28
means internal server error. And then we
02:01:31
want to return a JSON object with an
02:01:34
error property saying failed to generate
02:01:38
a response. Okay, now let's see what
02:01:42
happens if we send another request to
02:01:43
our API.
02:01:46
All right, here's what we get. We get a
02:01:47
proper error message that we can show on
02:01:49
the client.
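The error-handling shape might be sketched like this; the OpenAI call is replaced with a stub that can fail, and a minimal response type stands in for Express's, so the snippet runs on its own:

```typescript
// Minimal stand-in for Express's Response so this runs alone.
type Res = { status(code: number): Res; json(body: unknown): Res };

// Stands in for the OpenAI call, which can fail for many reasons:
// network down, servers down, invalid model, out of tokens, ...
async function generateText(prompt: string): Promise<string> {
  if (!prompt) throw new Error('The requested model does not exist.');
  return `echo: ${prompt}`;
}

async function handleChat(prompt: string, res: Res) {
  try {
    // Happy path: call the model and return its output.
    const message = await generateText(prompt);
    res.json({ message });
  } catch {
    // 500 Internal Server Error with a clean, client-safe message.
    res.status(500).json({ error: 'Failed to generate a response.' });
  }
}
```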
02:01:52
[Music]
02:01:57
Right now there is too much happening in
02:01:59
our chat API. We have some code for
02:02:02
managing conversation state. We have our
02:02:04
schema definition. We have data
02:02:06
validation, the call to OpenAI. There's
02:02:09
so much happening in this file. There's
02:02:11
no real separation of concerns.
02:02:13
Everything is just mixed together. It's
02:02:15
kind of like a chaotic closet. It's hard
02:02:17
to find a particular t-shirt in this
02:02:19
closet. So now we're going to refactor
02:02:21
or reorganize our code. Refactoring
02:02:24
means changing the structure of the code
02:02:26
without changing its functionality. It's
02:02:28
like reorganizing a chaotic closet.
02:02:30
We're not going to add or remove
02:02:32
clothes. We're just going to move
02:02:33
everything where it belongs. So we're
02:02:36
going to have different sections where
02:02:37
each section has one and only one
02:02:40
purpose. So over the next few lessons,
02:02:42
we're going to refactor our code and
02:02:44
introduce a few layers into our
02:02:45
application. Each layer will be focused
02:02:47
and have a single responsibility. At the
02:02:50
very top, we'll have controllers.
02:02:52
Controllers are responsible for
02:02:54
receiving HTTP requests and returning
02:02:57
HTTP responses. They act like a gateway
02:02:59
into our application. They're kind of
02:03:01
like a receptionist in a building. Below
02:03:03
controllers, we're going to have
02:03:04
services. And this is where we'll have
02:03:06
the actual application logic. For
02:03:09
example, in our chat API, the piece of
02:03:12
code for calling OpenAI to generate a
02:03:14
response belongs to this layer, belongs
02:03:16
to a service. Below services, we'll have
02:03:19
repositories and this is where we have
02:03:21
data. So, anytime we need to get or
02:03:23
store a piece of data, we work with a
02:03:25
repository. Where that data exists, we
02:03:27
don't care. It could be in the memory or
02:03:29
in a database. If you follow this
02:03:31
architecture, our codebase is going to
02:03:33
be much easier to maintain. If something
02:03:35
breaks or needs to change, we know
02:03:37
exactly where to look. It also improves
02:03:39
readability because each layer or module
02:03:42
has one clear purpose and overall it
02:03:45
makes our application more scalable
02:03:47
because we can reuse and test each piece
02:03:49
independently and plug them into new
02:03:51
features later without duplicating code.
02:03:54
So over the next few lessons, we're
02:03:55
going to refactor our code and extract
02:03:57
these layers one by one.
02:04:00
[Music]
02:04:06
So we talked about the layered
02:04:07
architecture in the previous lesson. Now
02:04:09
in this architecture the direction of
02:04:11
dependency between layers is always from
02:04:14
top to bottom. So controllers can talk
02:04:16
to services and services can talk to
02:04:18
repositories but not the other way
02:04:20
around. So the most fundamental layer in
02:04:22
our application is the repository layer.
02:04:24
In this lesson we'll introduce a
02:04:26
repository and then in the next lesson
02:04:28
we'll introduce a service that will use
02:04:30
our repository. So I told you that
02:04:32
repositories are for data access.
02:04:34
Anytime we need to get or store a piece
02:04:36
of data, we should use a repository. Now
02:04:39
back to our code. Look here we have this
02:04:42
line to keep track of our conversations.
02:04:45
And these two statements for getting and
02:04:47
storing the last response in a
02:04:49
conversation. All these pieces are about
02:04:51
data access and should be encapsulated
02:04:54
inside a repository. So back to our
02:04:56
project here in the server directory we
02:04:59
add a new folder called repositories
02:05:04
and in this folder we add a new file
02:05:07
called
02:05:09
conversation.repository.ts.
02:05:12
Now back to index.ts.
02:05:14
First we grab this piece of code for
02:05:18
declaring the conversations map. We cut
02:05:20
it and move it into our repository. Now
02:05:23
in this implementation we are storing
02:05:25
data in memory. This is what we call
02:05:28
implementation detail. Now when defining
02:05:31
our modules we don't want to expose
02:05:33
implementation detail. So we don't want
02:05:35
to export this constant from this
02:05:37
module. Instead we should export what we
02:05:40
call the public interface of the module.
02:05:43
Let me give you a metaphor. Think of a
02:05:45
remote control. A remote control has a
02:05:47
bunch of buttons on the outside that we
02:05:49
use. But it also has a complex
02:05:51
electronic board on the inside that we
02:05:52
don't care about. That's the
02:05:54
implementation detail and the buttons on
02:05:56
the outside are the public interface. So
02:05:58
when creating this module, we want to
02:06:00
keep the implementation detail private
02:06:02
and only export the public interface. In
02:06:05
this application, we need two functions
02:06:08
for getting and setting the last
02:06:10
response ID. So we export a function
02:06:13
called get last response ID.
02:06:17
We give it a parameter conversation ID
02:06:19
of type string.
02:06:22
Now we go to our index.ts
02:06:24
and grab
02:06:27
this piece of code,
02:06:29
cut it and move it into our module. And
02:06:33
of course we return the result. With
02:06:36
this in place we go to index.ts and
02:06:39
here we call get last response ID
02:06:42
and pass the conversation ID.
02:06:46
Similarly, we export another function
02:06:49
called set last response ID. We give it
02:06:52
two parameters conversation ID which is
02:06:55
a string and response ID which is also a
02:06:59
string.
02:07:01
Now we go back to index.ts and grab
02:07:05
this statement, cut it and move it into
02:07:09
this function. We just have to make a
02:07:11
tiny change. All right, good. Now back
02:07:13
to index.ts.
02:07:15
Here we call set last response ID and
02:07:18
give it two arguments conversation ID
02:07:21
and response
02:07:23
ID.
02:07:24
Now with this change, we have kept the
02:07:27
implementation detail private and only
02:07:30
exported these two functions. That means
02:07:33
this index.ts module doesn't know
02:07:36
anything about where the data is stored.
02:07:38
Right now it's in memory. If in the
02:07:40
future we decide to modify our
02:07:42
repository and store the data in a
02:07:44
database, this module is not going to be
02:07:46
affected because it's only dependent on
02:07:49
these two functions for getting and
02:07:51
setting the last response ID. There's
02:07:54
just a tiny problem here with our
02:07:56
current implementation. These two
02:07:57
functions look kind of like utility
02:08:00
functions. The responsibility or the
02:08:02
layer they belong to is not quite clear.
02:08:05
So here we're going to take a different
02:08:06
approach. We're going to go to our
02:08:08
repository. Instead of exporting these
02:08:11
two standalone functions, we're going to
02:08:13
export a constant called conversation
02:08:17
repository.
02:08:19
This is an object with two methods. I'm
02:08:22
going to grab this function definition,
02:08:25
copy it, paste it here. Now, let's also
02:08:30
do the same for this other function.
02:08:35
So, now we're exporting an object called
02:08:37
conversation repository with these two
02:08:40
methods.
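Put together, the repository might look like this sketch (method and file names follow the narration):

```typescript
// conversation.repository.ts - data access only.
// The Map is implementation detail: it is not exported, so consumers
// never learn where the data actually lives (memory today, maybe a
// database tomorrow).
const conversations = new Map<string, string>();

export const conversationRepository = {
  // Returns the ID of the last response in a conversation, if any.
  getLastResponseId(conversationId: string): string | undefined {
    return conversations.get(conversationId);
  },

  // Records the ID of the latest response for a conversation.
  setLastResponseId(conversationId: string, responseId: string): void {
    conversations.set(conversationId, responseId);
  },
};
```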
02:08:42
Now, we can remove these functions.
02:08:45
Save. Back to index.ts.
02:08:48
Now on the top we have these two errors
02:08:51
because these functions no longer are
02:08:53
exported. So instead we import
02:08:55
conversation repository from this
02:08:57
module. Now down here we prefix these
02:09:01
function calls with conversation
02:09:03
repository
02:09:06
like this. Now it's quite clear in the
02:09:09
code that we're asking the conversation
02:09:11
repository to give us the last response
02:09:14
ID. So that was our first step. In the
02:09:16
next lesson, we're going to introduce
02:09:18
the chat service.
02:09:21
[Music]
02:09:27
All right. Now, we're going to introduce
02:09:28
a service and this is where we'll have
02:09:30
the actual application logic. So, I told
02:09:32
you that controllers act as gateways. A
02:09:35
controller receives an HTTP request. It
02:09:38
validates it. If it's valid, it calls a
02:09:40
service to do the job. So a service
02:09:43
shouldn't know anything about HTTP
02:09:45
requests and responses. Now here in our
02:09:48
chat API up to this point we are working
02:09:50
with the request object and down here we
02:09:54
are working with the response. So all
02:09:56
the code in between belongs to a
02:09:58
service. So let's go to our project and
02:10:01
here in the server directory add a new
02:10:03
directory called services.
02:10:06
In this directory we add a new file
02:10:09
called chat.service.ts.
02:10:11
02:10:13
Now here we export an object called chat
02:10:16
service and give it a single method that
02:10:19
is send message. Here we need two
02:10:22
parameters prompt which is a string and
02:10:25
conversation ID which is also a string.
02:10:30
Now back to our index module. Let's grab
02:10:35
this piece of code for calling OpenAI to
02:10:38
generate a response and also this line
02:10:41
for updating the last response ID in our
02:10:44
conversation. So let's cut these lines
02:10:46
and move them to our chat service.
02:10:49
Now here we need to import the
02:10:51
conversation repository. So real quick.
02:10:55
Okay. Now we have an error on this line
02:10:57
because we are using the await keyword.
02:11:00
So let's mark this as async. We should
02:11:03
also bring the client object from our
02:11:06
index module. So back to the index
02:11:09
module. Now let me show you another cool
02:11:11
shortcut. Press command and P on Mac or
02:11:14
control and P on Windows. Earlier we
02:11:17
used this shortcut to jump to a file in
02:11:19
our project. Now here if we type an at
02:11:21
sign we can jump to a symbol in this
02:11:24
file. A symbol can be a variable, a
02:11:26
constant, a function and so on. So here
02:11:29
I want to find the client object. There
02:11:32
you go. Now we can grab these few lines,
02:11:34
cut and move them into this module. Now
02:11:39
let's import OpenAI.
02:11:42
Good. Now in this implementation,
02:11:45
this client object again is
02:11:47
implementation detail. So we don't want
02:11:49
to export it outside of this module. So
02:11:51
this is the only module in our
02:11:53
application where we know what LLM we're
02:11:55
going to use. If tomorrow we decide to
02:11:57
move away from OpenAI and use a
02:12:00
different LLM, this is the only module
02:12:02
we should modify. So the consumer of
02:12:04
this module which is going to be our
02:12:06
index module shouldn't know what LLM we
02:12:09
are using under the hood. That is the
02:12:11
implementation detail and what we are
02:12:13
exporting here is the public interface.
02:12:17
Okay. So we don't have any errors in
02:12:19
this module. Now let's go back to our
02:12:22
index module. On the top, we can remove
02:12:25
this line for importing conversation
02:12:27
repository as well as OpenAI. So our
02:12:30
index module is getting leaner with each
02:12:32
refactoring we have been doing. Okay.
02:12:34
Now we have an error down here. So first
02:12:38
we extract the prompt and conversation
02:12:40
ID from the request body. Right after we
02:12:43
call chat service
02:12:46
dot send message. We give it two
02:12:49
arguments prompt and conversation ID. We
02:12:52
await the call and get a response.
02:12:56
Now here we have an error because I
02:12:58
forgot to return the response object
02:13:01
from our service. So after we get the
02:13:04
response, we update the last response ID
02:13:07
in our conversation repository. And
02:13:10
finally we should return the response.
02:13:12
Okay, no more errors. Everything is
02:13:15
working. But there's a hidden problem
02:13:17
here. Look here we are getting this
02:13:19
response object but this object is
02:13:21
specific to the OpenAI platform. So
02:13:24
here we are using the output text
02:13:26
property to return a message to the
02:13:28
client. But what if tomorrow we use a
02:13:30
different LLM like Gemini and the
02:13:32
response object we get from Gemini
02:13:34
doesn't have a property called output
02:13:36
text. So back to our chat service.
02:13:40
This chat service object that we are
02:13:42
exposing is what we call a leaky
02:13:46
abstraction. What does it mean? Well,
02:13:49
this is an abstraction over OpenAI
02:13:51
because it hides the complexity. It
02:13:53
hides the details. The consumers of this
02:13:56
module like our index module don't know
02:13:58
what LLM we are using under the hood. So
02:14:01
the chat service is an abstraction over
02:14:03
OpenAI, but it's a leaky abstraction
02:14:06
because some of the details are being
02:14:08
exposed to the outside to the consumers.
02:14:11
In this case we are returning this
02:14:14
response object which is specific to the
02:14:15
OpenAI platform. That's why we say this
02:14:18
service is a leaky abstraction. To solve
02:14:21
this problem we have to introduce a new
02:14:23
type that would be platform agnostic.
02:14:25
This will represent a response from an
02:14:27
LLM. So up here, let's define an
02:14:33
interface. We can also use a type. It
02:14:35
doesn't really make a difference. We
02:14:37
call this chat response and give it two
02:14:40
properties. At a minimum, we need an ID,
02:14:43
which is a string, and a message, which
02:14:46
is also a string.
02:14:48
Next, we annotate this method with its
02:14:51
return type. It should return a promise
02:14:55
of chat response.
02:14:58
Okay, now we have an error because down
02:15:02
here we're returning this response
02:15:04
object which is not an instance of the
02:15:06
type that we just defined. So we have to
02:15:08
return a custom object. Here we add two
02:15:11
properties ID which we set to response
02:15:14
ID and message which we set to
02:15:17
response.output_text.
02:15:19
So if tomorrow we decide to use a
02:15:22
different LLM and that LLM doesn't have
02:15:24
an output text property, this is the
02:15:26
only place in our codebase we have to
02:15:28
modify. In other words, this module,
02:15:30
this chat service encapsulates all the
02:15:33
details for working with an LLM and
02:15:35
exposes a simple interface that is our
02:15:38
chat service with the send message
02:15:40
method. Okay, now back to our index
02:15:42
module. Here we have to change output
02:15:44
text to message.
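Assembled, the service might look like the sketch below. The OpenAI client and the conversation repository are stubbed inline so the snippet runs on its own; the model name and the responses.create parameters are assumptions based on earlier lessons:

```typescript
// chat.service.ts - a sketch. The ChatResponse type keeps the public
// interface platform-agnostic, so no OpenAI-specific shape leaks out.
interface ChatResponse {
  id: string;
  message: string;
}

// Stands in for the OpenAI client, which lives only in this module.
const client = {
  responses: {
    create: async (options: {
      model: string;
      input: string;
      previous_response_id?: string;
    }) => ({ id: 'resp_123', output_text: `echo: ${options.input}` }),
  },
};

// Stands in for the conversation repository from the previous lesson.
const conversationRepository = {
  store: new Map<string, string>(),
  getLastResponseId(conversationId: string) {
    return this.store.get(conversationId);
  },
  setLastResponseId(conversationId: string, responseId: string) {
    this.store.set(conversationId, responseId);
  },
};

export const chatService = {
  async sendMessage(
    prompt: string,
    conversationId: string
  ): Promise<ChatResponse> {
    const response = await client.responses.create({
      model: 'gpt-4o-mini', // assumed model name
      input: prompt,
      previous_response_id:
        conversationRepository.getLastResponseId(conversationId),
    });
    conversationRepository.setLastResponseId(conversationId, response.id);
    // Map the platform-specific object onto our own type.
    return { id: response.id, message: response.output_text };
  },
};
```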
02:15:47
Okay, no more errors. Now, one last
02:15:49
thing before we finish this lesson. I
02:15:51
just noticed that in our chat service,
02:15:54
we are using the wrong model. So, let's
02:15:57
remove this exclamation mark. Great.
02:16:00
We're done with this lesson. In the next
02:16:02
lesson, we'll introduce a controller.
02:16:05
[Music]
02:16:11
Earlier I told you that a controller is
02:16:13
a gateway to our application. It
02:16:15
receives an HTTP request and returns an
02:16:18
HTTP response. As part of this, first it
02:16:22
validates the request data. If it's
02:16:24
invalid, it returns an error. Otherwise,
02:16:27
it calls one or more services to perform
02:16:30
some functionality. And finally, it
02:16:32
returns a response to the client. So
02:16:35
now, we're going to grab all the code
02:16:36
inside this function or this route
02:16:38
handler and move it to a controller. So
02:16:42
back to our project here in our server
02:16:44
application, let's add a new directory
02:16:47
called controllers.
02:16:50
Inside this directory, we add a new file
02:16:52
called chat.controller.ts.
02:16:56
In this module, we export an object
02:16:59
called chat controller and give it a
02:17:02
method called send message with two
02:17:05
parameters request
02:17:09
and response.
02:17:12
Now while this code works it's better to
02:17:14
explicitly import these types from
02:17:16
express. So on the top import type
02:17:20
Request and Response from express. Now
02:17:25
back to our index module.
02:17:28
Earlier we talked about the shortcut for
02:17:30
extending selection. You can find it up
02:17:32
here. On Mac it's control shift command
02:17:35
and right. So the cursor is here. I'm
02:17:39
going to hold shift, control, and
02:17:40
command with my left hand and keep
02:17:43
pressing the right arrow to extend the
02:17:45
selection. Now we have the semicolon,
02:17:47
now the entire line. Keep going. We have
02:17:50
this entire try catch block. Let's keep
02:17:53
going. And now we have the entire body
02:17:56
of this function selected. Cut. Let's
02:17:59
paste it into our chat controller
02:18:04
and save the changes. Now let's see
02:18:07
what's happening. So we're using the
02:18:09
chat service. Let's import it on the
02:18:11
top.
02:18:13
We're using await. So let's make this
02:18:15
method async.
02:18:18
Also, we need the chat schema. So back
02:18:21
to our index module. Once again, command
02:18:24
P on Mac or control P on Windows. We
02:18:27
type an at sign and find chat schema.
02:18:31
Again we extend the selection, grab this
02:18:33
object, cut it and move it to our
02:18:37
controller module.
02:18:39
Here we need to import Z from Zod. Good.
02:18:44
No more errors here. Now in this
02:18:47
implementation, this chat schema is
02:18:51
implementation detail
02:18:53
and this chat controller is the public
02:18:57
interface. So the consumer of this
02:19:00
module which is the index module
02:19:02
shouldn't know what library we are using
02:19:04
for validating data. That is
02:19:06
implementation detail. In other words,
02:19:08
it's none of the index module's business. The
02:19:11
index module just needs a method for
02:19:13
sending a message to the application.
02:19:15
How the request is validated doesn't
02:19:17
matter. Right now, we are using Zod.
02:19:19
Tomorrow we might use a different
02:19:20
library. If we decide to replace Zod
02:19:23
with something else, this is the only
02:19:24
module we want to modify. We don't want
02:19:26
to modify both this module as well as
02:19:28
the index module. Okay, so no more
02:19:32
errors here. Now back to the index
02:19:34
module. We're going to replace this
02:19:37
lambda function with chat controller
02:19:41
dot send message. Now jump to the top.
02:19:45
Let's remove these unused import
02:19:47
statements. So we can press command and
02:19:49
period on Mac or control and period on
02:19:51
Windows and delete all unused imports.
02:19:54
So again, our index module is getting
02:19:57
cleaner with each refactoring. But we're
02:19:59
not done yet. There's one more
02:20:00
refactoring which we'll do next.
02:20:03
[Music]
02:20:09
Right now, all of our route definitions
02:20:11
are in index.ts. That works fine for
02:20:13
small projects, but as our application
02:20:15
grows, keeping everything in one file can
02:20:17
get messy. So in this lesson, we'll move
02:20:19
our route definitions into a separate
02:20:21
file. It's a small change, but it helps
02:20:23
keep our code clean, modular, and easier
02:20:26
to maintain as we add more endpoints.
02:20:29
So, back to our project here in the
02:20:31
server application, we add a new file
02:20:34
called routes.ts.
02:20:37
In this file, first we import express
02:20:40
from express.
02:20:43
Now, back to index.ts.
02:20:45
Let's grab all our definitions,
02:20:49
cut, and paste them here.
02:20:53
Now, here we're using the request and
02:20:55
response types. So, let's import them
02:20:58
from express: import type Request and
02:21:02
Response. Okay, good. Now, in this
02:21:04
module, we are not going to work with
02:21:06
the app. Instead, we're going to work
02:21:07
with a router. So, here we create a
02:21:11
router. We set this object to express
02:21:14
router. On this router, we register our
02:21:17
endpoints. Now, let me show you another
02:21:19
cool shortcut. Let's say we want to
02:21:21
rename all instances of app to router.
02:21:24
With app selected under the selection
02:21:27
menu, look at the shortcut for select
02:21:30
all occurrences. On Mac, it's shift
02:21:33
command and L. On Windows, it's probably
02:21:35
shift control and L. So, I'm going to
02:21:38
press shift command and L. Now we have
02:21:41
multicursor editing. All instances of
02:21:43
app are selected. So we press backspace
02:21:46
and replace them all with router. Now to
02:21:49
jump out of multicursor editing, we
02:21:51
press the escape button twice. Okay,
02:21:54
good. Now down the bottom, we should
02:21:57
import chat controller.
02:22:00
Okay, now technically in a real
02:22:03
application, we shouldn't define our route
02:22:06
handlers inline. We should only have a
02:22:07
reference to a function inside a
02:22:09
controller, but we added these earlier
02:22:11
for demonstration as part of setting up
02:22:14
our full stack project. These are
02:22:16
one-liners. I'm not worried about them at
02:22:18
this point, so we don't need to change
02:22:19
them. Now, finally, at the end, we
02:22:21
export the router as the default object
02:22:24
from this module. I'm using default
02:22:26
because this is the only object we
02:22:28
should export from this module. Now,
02:22:31
back to our index module on the top. We
02:22:34
don't need this line anymore. Let's
02:22:36
remove it and also chat controller.
02:22:38
Instead, we import router from the
02:22:42
current folder / routes. So, we
02:22:46
configure our environment variables.
02:22:48
Next, we create an application. We use
02:22:51
the JSON middleware right after we add
02:22:55
our router. Okay, so these three lines
02:22:58
are closely related. They're all about
02:23:00
setting up our app. They're a little bit
02:23:02
different from initializing the port. So
02:23:04
I like to add a line break between the
02:23:06
two. So we create the app, initialize
02:23:08
the port and finally start the app. Now
02:23:11
our index module is much cleaner. We
02:23:13
only have the necessary code for
02:23:15
starting the application. All the
02:23:17
details are somewhere else. Now let's
02:23:19
review what we have done so far. So we
02:23:21
created the routes module where we have
02:23:24
our routes or endpoints. Now right now
02:23:27
we only have a single file for this
02:23:28
purpose. But as our application grows,
02:23:31
we might have various route files for
02:23:33
different functional areas in our
02:23:34
application. For example, we can have a
02:23:36
route file for registering all the
02:23:38
routes related to products and
02:23:40
categories. We can have another route
02:23:42
file for managing orders. We can have
02:23:44
another route file for admin endpoints
02:23:46
and so on. Okay. So in this route file,
02:23:49
we are using the chat controller as our
02:23:51
route handler. Let's take a quick look
02:23:53
at this method.
02:23:55
This controller is the gateway to our
02:23:57
application. It receives an HTTP
02:24:00
request, validates it. If it's valid, it
02:24:03
asks the service to do the job and
02:24:05
finally it returns a response. Now, in
02:24:08
our service, we have the application
02:24:10
logic. So, for a chat API, we have the
02:24:13
code for calling an LLM to generate a
02:24:16
response and updating the last response
02:24:19
ID in our conversation repository. In
02:24:22
our repository, we only have data access
02:24:24
code. There is no HTTP request here.
02:24:27
There is no LLM call. There is no
02:24:29
middleware setup. So over the past few
02:24:31
refactorings, we broke down index.ts
02:24:34
into a set of small and focused modules.
02:24:37
Each having a single responsibility. The
02:24:39
repository has data access code. The
02:24:42
service has application logic. The
02:24:44
controller acts as the gateway and the
02:24:46
routes module has all the route definitions. So
02:24:49
we're done with implementing the back
02:24:51
end. Over the next few lessons, we'll
02:24:52
start building the front end.
02:24:57
Now that the back end is ready, it's
02:24:59
time to move on to the front end. Just
02:25:01
like we did with the back end, we're
02:25:02
going to build a fully functioning
02:25:04
chatbot step by step. And once
02:25:06
everything works the way it should,
02:25:08
we'll take time to refactor and organize
02:25:10
our code to keep it clean and modular.
02:25:12
Let's jump in.
02:25:15
And that's it for this tutorial. What
02:25:16
you just watched is the first two hours
02:25:18
of my full 7-hour course on building AI
02:25:22
powered apps. If you enjoy this and want
02:25:24
to keep going, the full course covers
02:25:26
everything in much more depth. You'll
02:25:28
find the link in the description. I
02:25:29
would love to have you join me in the
02:25:31
full course, and I can't wait to see
02:25:32
what you will build.

Description:

This AI course teaches you how to build AI-powered apps with React & Express. You’ll learn about LLMs, prompt engineering, and full-stack AI integration. 🚀 Want to dive deeper? - Get the full course: https://codewithmosh.com//p/build-ai-powered-apps - Subscribe for more videos like this: https://www.youtube.com/channel/UCWv7vMbMWH4-V0ZXdmDpPBA?sub_confirmation=1 💡 Related tutorials https://www.youtube.com/watch?si=ALGbprFSjzSnVfms&v=SqcY0GlETPk https://www.youtube.com/watch?si=NamrMlwEwC99MIfQ&v=d56mG7DezGs ✋ Stay connected: - Full Courses: https://codewithmosh.com - Twitter: https://twitter.com/moshhamedani - Facebook: https://www.facebook.com/unsupportedbrowser - Instagram: https://www.facebook.com/unsupportedbrowser - LinkedIn: https://www.linkedin.com/school/codewithmosh/ 📖 Chapters 0:00:00 Welcome 0:01:26 Prerequisites 0:02:21 What You’ll Learn 0:06:15 Setting Up Your Development Environment 0:07:12 Introduction to AI Models 0:07:48 Rise of AI Engineering 0:11:49 What Are Large Language Models? 0:16:12 What Can You Do With Language Models? 
0:18:37 Understanding Tokens 0:21:40 Counting Tokens 0:25:43 Choosing the Right Model 0:30:45 Understanding Model Settings 0:39:32 Calling Models 0:47:07 Setting Up a Modern Full-Stack Project 0:48:19 Setting Up Bun 0:49:51 Creating the Project Structure 0:52:39 Creating the Backend 0:59:18 Managing OpenAI API Key 1:04:33 Creating the Frontend 1:07:18 Connecting the Frontend and Backend 1:12:31 Running Both Apps Together 1:15:55 Setting Up TailwindCSS 1:19:30 Setting Up ShadCN/UI 1:26:00 Formatting Code With Prettier 1:31:02 Automating Pre-Commit Checks With Husky 1:37:53 Project: Building a ChatBot 1:38:22 Building the Backend 1:38:58 Building the Chat API 1:45:25 Testing the API 1:47:22 Managing Conversation State 1:53:44 Input Validation 1:59:33 Error Handling 2:01:52 Refactoring the Chat API 2:04:00 Extracting Conversation Repository 2:09:21 Extracting Chat Service 2:16:05 Extracting Chat Controller 2:20:03 Extracting Routes 2:24:55 Building the Frontend

Mediafile available in formats

• Popular
• HD video
• Only sound
• All
* — If the video is playing in a new tab, go to it, then right-click on the video and select "Save video as..."
** — Link intended for online playback in specialized players

Questions about downloading video

How can I download the "AI Course for Developers – Build AI-Powered Apps with React" video?

    The http://univideos.ru/ website is the best way to download a video or a separate audio track without installing programs or extensions.

    The UDL Helper extension adds a convenient button, seamlessly integrated into the YouTube, Instagram, and OK.ru sites, for fast content downloads.

    The UDL Client program (for Windows) is the most powerful solution: it supports more than 900 websites, social networks and video hosting sites, as well as any video quality available in the source.

    UDL Lite is a really convenient way to access the website from your mobile device. With it, you can easily download videos directly to your smartphone.

Which format of the "AI Course for Developers – Build AI-Powered Apps with React" video should I choose?

    The best quality formats are FullHD (1080p), 2K (1440p), 4K (2160p) and 8K (4320p). The higher your screen's resolution, the higher the video quality should be. However, there are other factors to consider: download speed, the amount of free space, and device performance during playback.

Why does my computer freeze when downloading the "AI Course for Developers – Build AI-Powered Apps with React" video?

    The browser/computer should not freeze completely! If this happens, please report it with a link to the video. Sometimes a video cannot be downloaded directly in a suitable format, so we have added the ability to convert the file to the desired format. In some cases, this process may make heavy use of your computer's resources.

How can I download the "AI Course for Developers – Build AI-Powered Apps with React" video to my phone?

    You can download a video to your smartphone using the website or the UDL Lite PWA. It is also possible to send a download link via QR code using the UDL Helper extension.

How can I download the audio track (music) of "AI Course for Developers – Build AI-Powered Apps with React" as an MP3?

    The most convenient way is to use the UDL Client program, which supports converting video to MP3 format. In some cases, MP3 can also be downloaded through the UDL Helper extension.

How can I save a frame from the "AI Course for Developers – Build AI-Powered Apps with React" video?

    This feature is available in the UDL Helper extension. Make sure that "Show the video snapshot button" is checked in the settings. A camera icon should appear in the lower right corner of the player to the left of the "Settings" icon. When you click on it, the current frame from the video will be saved to your computer in JPEG format.

How do I play and download streaming video?

    For this you need the VLC player, which can be downloaded for free from the official website https://www.videolan.org/vlc/.

    How to play streaming video through VLC player:

    • in video formats, hover your mouse over "Streaming Video**";
    • right-click on "Copy link";
    • open VLC-player;
    • select Media - Open Network Stream - Network in the menu;
    • paste the copied link into the input field;
    • click "Play".
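
    The GUI steps above also have a rough command-line equivalent. This is a sketch, not an official recipe: the stream URL below is a hypothetical placeholder, and the command is only printed here so you can inspect it before running it on a machine with VLC installed.

```shell
# Hypothetical stream link -- substitute the URL copied via right-click -> "Copy link".
STREAM_URL="https://example.com/stream.m3u8"

# Equivalent of Media -> Open Network Stream -> Network -> Play.
CMD="vlc $STREAM_URL"
echo "$CMD"
```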

    To download streaming video via VLC player, you need to convert it:

    • copy the video address (URL);
    • select "Open Network Stream" in the "Media" item of VLC player and paste the link to the video into the input field;
    • click on the arrow on the "Play" button and select "Convert" in the list;
    • select "Video - H.264 + MP3 (MP4)" in the "Profile" line;
    • click the "Browse" button to select a folder to save the converted video and click the "Start" button;
    • conversion speed depends on the resolution and duration of the video.
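
    As a command-line sketch of the conversion steps above (assuming VLC is installed; STREAM_URL and the output file name are hypothetical placeholders), the "Video - H.264 + MP3 (MP4)" profile corresponds to VLC's --sout transcode chain. The command is printed rather than executed so you can review it first; drop the echo to start the actual conversion.

```shell
# Hypothetical placeholders -- substitute your own stream URL and output file.
STREAM_URL="https://example.com/stream.m3u8"
OUT="converted.mp4"

# The "Video - H.264 + MP3 (MP4)" profile expressed as a --sout chain:
# transcode to H.264 video + MP3 audio, then mux into an MP4 file on disk.
SOUT="#transcode{vcodec=h264,acodec=mp3,ab=128}:standard{access=file,mux=mp4,dst=$OUT}"

# -I dummy runs VLC without a GUI; vlc://quit makes it exit when the conversion finishes.
echo vlc -I dummy "$STREAM_URL" --sout "$SOUT" vlc://quit
```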

    Warning: this download method no longer works with most YouTube videos.

What's the price of all this stuff?

    It costs nothing. Our services are absolutely free for all users. There are no PRO subscriptions and no restrictions on the number or maximum length of downloaded videos.