Building Real Time Tools to Help Yourself: A TensorFlow.js Posture Guard
I caught myself slumped at my desk today. Shoulders rounded, neck forward, wrists at the wrong angle. The kind of position you only notice once it has already given you a headache. Instead of downloading another wellness app, I opened a new project, pulled in TensorFlow.js, and shipped JC Posture Guard before I went to bed.
The future is software you build for your own life.
That is the small idea behind this post. The future is not waiting for someone to make the tool you need. The future is building the tool yourself, in an afternoon, against a real problem in your own life.
Why I built JC Posture Guard today
From a real problem to a working tool, in one sitting
The pain was specific. The fix had to be specific too. A general "stand up more" reminder app does not see how I am sitting. It just nags on a timer. What I wanted was a tool that watches my posture in the browser, draws the keypoints back at me, and tells me when I am drifting forward or hunching.
That is exactly what JC Posture Guard does. The webcam stays in the browser. Nothing is uploaded. Inference runs locally on every frame. When my posture slips, the indicator changes color and I get a real-time nudge to sit back up.
The whole thing took less time than a meeting. That is the part worth paying attention to.
TensorFlow.js is Machine Learning, not AI
ML is one branch of AI, not the same thing
I want to clear something up before going further, because the words get blurred in marketing copy and on social media. TensorFlow.js is a machine learning library. It is not an AI assistant. The two are related, but they are not the same.
Artificial intelligence is the broad goal — machines that perform tasks we associate with human intelligence: reasoning, planning, generating language, making judgments, holding a conversation. Machine learning is one technique for getting there. It is a subset of AI focused on systems that learn statistical patterns from data and then use those patterns to make predictions.
Posture detection sits squarely in the ML half of that picture. The model was trained on a large corpus of labeled images of human bodies. It does not "understand" your spine. It maps pixels to coordinates of joints. That is pattern recognition, not cognition.
What machine learning actually does
Machine learning is about prediction over patterns. You give a model a lot of examples, it learns the statistical regularities in those examples, and at inference time it produces an output that fits the pattern. Image classifiers, speech-to-text, recommendation engines, fraud scoring, pose estimation. This is all machine learning.
A trained ML model is a frozen function. It is fast, deterministic given the same input, and it does not reason about its output. If you show it something far outside the training distribution, it will still confidently produce a guess. That is a feature in some contexts and a serious risk in others.
What artificial intelligence has come to mean
When most people say "AI" today, they mean large language models like the ones behind chat assistants. Those systems are also built using machine learning under the hood. They are trained on huge amounts of text, but the surface behavior feels different. They generate language, follow instructions, hold context across a conversation, and reason about ambiguous prompts.
That is the experience that has reshaped expectations. AI in the modern sense is conversational, generative, and flexible. ML in the older sense is narrow, fast, and focused: a single trained model doing one job very well.
Why the distinction matters
A model that runs on every frame in the browser
The distinction matters because it changes what you ship and how it runs. Posture detection in the browser is narrow ML — a single pre-trained pose model running locally, 30+ frames per second, no API calls, no token costs, no latency from a server round trip.
If I had built this with a chat AI behind it, every frame would mean a network request, real money per call, and a feedback loop too slow to actually catch posture in real time. Choosing the right tool comes down to honest engineering: what does the problem actually need?
- Use ML when you need fast, narrow, deterministic predictions on a stream of data.
- Use AI when you need flexible reasoning, conversation, generation, or open-ended output.
- Use both when the perception layer is ML and the reasoning layer is AI.
How pose detection works in the browser
17 keypoints, plotted live on every frame
JC Posture Guard uses a pose estimation model that takes a video frame and returns the pixel coordinates of seventeen keypoints: the nose, eyes, ears, shoulders, elbows, wrists, hips, knees, and ankles. From those coordinates you can derive everything else: shoulder angle, head-forward position, slouch detection, even symmetry between left and right.
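Those seventeen keypoints come back in a fixed order — the standard COCO-17 ordering used by browser pose models such as MoveNet — so a small lookup table is enough to pull out the joints the posture rules care about. A minimal sketch (the helper name is mine, not the app's):

```javascript
// The 17 COCO keypoints, in the order pose models such as MoveNet return them.
const KEYPOINT_NAMES = [
  "nose",
  "left_eye", "right_eye",
  "left_ear", "right_ear",
  "left_shoulder", "right_shoulder",
  "left_elbow", "right_elbow",
  "left_wrist", "right_wrist",
  "left_hip", "right_hip",
  "left_knee", "right_knee",
  "left_ankle", "right_ankle",
];

// Look up one keypoint by name in the model's output array of
// { x, y, score } objects, which arrives in the same fixed order.
function keypoint(keypoints, name) {
  return keypoints[KEYPOINT_NAMES.indexOf(name)];
}
```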
The math is light. Once the keypoints come back from the model, posture rules are just geometry. Is the head past the shoulders? Are the shoulders below their baseline? Is the back curved beyond a threshold? That is not AI deciding for you. That is plain trigonometry on top of a fast ML output.
The win is that the whole pipeline runs in your browser tab. No server, no upload, no privacy compromise. The webcam frame never leaves the page.
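One detail worth stealing even if you never touch a pose model: a per-frame indicator should not flip on a single noisy frame. A small hysteresis counter keeps the nudge calm. This is a sketch with an assumed hold of 15 frames (roughly half a second at 30 fps), not the app's actual logic:

```javascript
// Hysteresis for a per-frame boolean signal: only change the reported
// state after the raw signal has disagreed with it for `holdFrames`
// consecutive frames. `holdFrames = 15` is an assumed default.
function makeIndicator(holdFrames = 15) {
  let state = "good";
  let streak = 0;
  return function update(badThisFrame) {
    const candidate = badThisFrame ? "bad" : "good";
    if (candidate === state) {
      streak = 0;           // signal agrees with current state; nothing pending
    } else if (++streak >= holdFrames) {
      state = candidate;    // disagreement held long enough; commit the flip
      streak = 0;
    }
    return state;
  };
}
```

Feed it the `ok` flag from the posture check on every frame and drive the indicator color from its return value; momentary keypoint jitter never reaches the screen.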
When ML and AI work together
The most interesting tools coming next will combine both. ML handles the perception: vision, audio, sensor streams, pose, gesture, intent classification. AI handles the conversation around that perception: explaining, suggesting, summarizing, planning.
A few realistic combinations:
- Posture coach. ML detects slouching in real time. An AI layer writes a short, personalized stretch routine at the end of the day based on which muscle groups got tight.
- Workspace assistant. ML watches focus and idle time. AI writes a calm end-of-day recap and suggests adjustments for tomorrow.
- Accessibility tool. ML reads sign language gestures from a webcam. AI translates and rewrites the meaning into natural sentences for the receiver.
- Home safety device. ML detects a fall. AI generates the right calm message to the right contact, with context about what happened.
The pattern is the same in each case: fast, narrow ML at the edge, flexible AI on top of the result. That is the architecture worth learning.
The future is tools you build for yourself
The barrier to building has dropped. A pre-trained pose model is a few imports away. A chat model is an API call away. A static site host gives you a public URL in minutes. The skill that matters now is noticing a friction point in your own day and turning it into a working tool before the day ends.
You do not need a startup. You do not need a roadmap. You need to notice the small thing that is wasting your attention or hurting your body and decide that you are allowed to fix it. JC Posture Guard exists because I got tired of slouching at my desk. That is the entire origin story.
Try it yourself
If you want to see it run, open JC Posture Guard in a browser, give the page permission to use your camera, and sit at your desk the way you usually do. The keypoints will draw onto the live feed. Adjust your shoulders, lower your chin, sit back — the indicator will respond.
Nothing about your video leaves the page. The whole thing is TensorFlow.js running locally.
The source is on GitHub at jessicacousins/tensorflow_posture.
The bottom line
Machine learning is a focused tool. Artificial intelligence is a flexible reasoning partner. They are not interchangeable, and pretending they are is how people end up overpaying for a feature they could ship for free.
When something in your day starts costing you attention or comfort, stop tolerating it. Build something. Ship it before dinner. Use ML where ML belongs and AI where AI belongs. The future is not handed to you. It is the one you build for yourself, in real time, against your own life.