This project is a full-stack pothole reporting and monitoring platform with a citizen-facing app, an admin dashboard, an Express/MongoDB backend, and a Flask-based AI detection service. The repo is organized into user-panel and admin-panel frontends, plus a Node backend and a Flask service.
- Citizen web app (Vite + React + TypeScript) with Firebase authentication and protected routes.
- Report creation with image upload, geolocation and metadata, stored in MongoDB.
- AI-powered image analysis via a Flask `/predict-image` endpoint.
- Government admin dashboard (Create React App) with map visualization using Leaflet and analytics using Recharts.
- Admin login with token-based authentication against `http://localhost:5000/api/login`.
This section summarizes the main modules and how they relate.
- `user-panel/` – Citizen UI (Vite + React + TS + Tailwind + shadcn/ui).
- `user-panel/backend/` – Node/Express API integrating with MongoDB and Flask AI.
- `user-panel/models/Report.js` – MongoDB schema for stored reports.
- `user-panel/integrations/firebase/` – Firebase auth, Firestore and Storage client.
- `admin-panel/` – Admin dashboard (CRA + Tailwind).
- Flask AI service – Python app with a `/predict-image` route listening on port `5001`.
This diagram shows how the pieces connect.
```mermaid
flowchart LR
    %% CLUSTERS
    subgraph CitizenApp ["Citizen Web App - User Panel"]
        U["User Browser"]
    end
    subgraph Firebase ["Firebase Services"]
        FA["Auth"]
        FD["Firestore and Storage"]
    end
    subgraph NodeAPI ["Node Express API - User Backend"]
        E["Express API 5000"]
        M["MongoDB Database"]
    end
    subgraph FlaskAI ["Flask AI Detection Service"]
        F["Predict Image API 5001"]
    end
    subgraph AdminApp ["Admin Dashboard Panel"]
        A["Admin Browser"]
    end
    %% CONNECTIONS
    U -->|Login and Signup| FA
    U -->|Submit Report with Image| E
    E -->|Store and Query Data| M
    E -->|Send Image to AI| F
    F -->|AI Response| E
    FA -->|Authentication State| U
    U -->|Retrieve Dashboard Data| FD
    A -->|Login Request| E
    A -->|Fetch Reports and Stats| E
    E -->|Return Aggregated Data| A
```
Make sure you have these installed before running the project.
- Node.js (LTS) and npm or bun.
- Python 3 for the Flask AI service.
- MongoDB instance (Atlas or self-hosted).
- A Firebase project (Web app) with Email/Password auth enabled.
The user panel is a Vite React + TypeScript application with Tailwind and shadcn/ui.
- Install dependencies:

  ```sh
  cd user-panel
  npm install   # or bun install
  ```

- Create an `.env` file in `user-panel/`:

  ```sh
  VITE_FIREBASE_API_KEY=...
  VITE_FIREBASE_AUTH_DOMAIN=...
  VITE_FIREBASE_PROJECT_ID=...
  VITE_FIREBASE_STORAGE_BUCKET=...
  VITE_FIREBASE_MESSAGING_SENDER_ID=...
  VITE_FIREBASE_APP_ID=...
  ```

  These keys are read in `src/integrations/firebase/client.ts`.

- Start the dev server:

  ```sh
  npm run dev
  ```

  Vite is configured to run on port `8080`.
- React + TypeScript + Vite entry point: `src/main.tsx`, `src/App.tsx`.
- Auth via an `AuthProvider` wrapping React Router routes and a `useAuth` hook.
- Themed UI via `ThemeProvider` and `ThemeToggle`.
- Dashboard and reporting sections in `src/components/dashboard-section.tsx` and `src/components/report-section`.
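The `.env` keys listed above are consumed by the Firebase client in `src/integrations/firebase/client.ts`. As a configuration sketch only (the export names below are illustrative, not taken from the repo), such a module is typically wired with the Firebase v9+ modular SDK like this:

```typescript
// Hypothetical sketch of src/integrations/firebase/client.ts — adapt to the actual file.
import { initializeApp } from "firebase/app";
import { getAuth } from "firebase/auth";
import { getFirestore } from "firebase/firestore";
import { getStorage } from "firebase/storage";

// Vite only exposes variables prefixed with VITE_ on import.meta.env.
const firebaseConfig = {
  apiKey: import.meta.env.VITE_FIREBASE_API_KEY,
  authDomain: import.meta.env.VITE_FIREBASE_AUTH_DOMAIN,
  projectId: import.meta.env.VITE_FIREBASE_PROJECT_ID,
  storageBucket: import.meta.env.VITE_FIREBASE_STORAGE_BUCKET,
  messagingSenderId: import.meta.env.VITE_FIREBASE_MESSAGING_SENDER_ID,
  appId: import.meta.env.VITE_FIREBASE_APP_ID,
};

const app = initializeApp(firebaseConfig);
export const auth = getAuth(app);      // used by AuthProvider / useAuth
export const db = getFirestore(app);   // dashboard data
export const storage = getStorage(app); // report image uploads
```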
The Node backend exposes APIs for reports and bridges to MongoDB and the Flask AI service.
- Install dependencies:

  ```sh
  cd user-panel/backend
  npm install
  ```

  (If there is no `package.json` yet, create one and add `express`, `mongoose`, `cors`, `multer`, `axios`, and `form-data`.)

- Configure MongoDB:

  - Update `mongoose.connect(...)` in `server.js` to use an environment variable instead of the hard-coded string. For example:

    ```js
    // server.js (example change, not yet in repo)
    mongoose.connect(process.env.MONGODB_URI);
    ```

  - Then create a `.env` (or similar) with your MongoDB connection string.

- Start the API server:

  ```sh
  node server.js
  ```

  The API listens on port `5000`.
- `server.js` mounts the `/api` routes and connects to MongoDB.
- `utils/reports.js` defines the `/report` route that:
  - Accepts multipart image uploads.
  - Sends the image to Flask at `http://127.0.0.1:5001/predict-image`.
  - Stores the full report plus the AI result in MongoDB using the `Report` model.
The Flask service performs image analysis and returns predictions to the Node backend.
- Create and activate a Python virtual environment.

- Install Flask and the ML / image libraries required by your model code.

- Ensure the app exposes `POST /predict-image` and runs on port `5001`:

  ```python
  if __name__ == "__main__":
      app.run(host="0.0.0.0", port=5001, debug=True)
  ```

  This is referenced by the `axios.post("http://127.0.0.1:5001/predict-image", ...)` call in `utils/reports.js`.
The admin panel is a Create React App-based dashboard that consumes the Node API and displays map and analytics views.
- Install dependencies:

  ```sh
  cd admin-panel
  npm install
  ```

- Start the dev server:

  ```sh
  npm start
  ```

  CRA defaults to port `3000`.
- `src/App.js` switches between `Login` and `Dashboard` based on the token and user stored in local storage.
- `src/pages/Login.js` posts credentials to `http://localhost:5000/api/login`, then stores a JWT token and user data.
- `src/pages/Dashboard.js` shows:
  - A Leaflet map with damage markers.
  - Recharts graphs for severity, trends, and other stats.
Demo credentials are shown in the login page UI (for local development only).
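The login flow in `Login.js` can be sketched as follows (helper names are illustrative, not from the repo; the `{ token, user }` shape matches the login API response documented in the API reference section):

```typescript
// Sketch of the admin login flow in admin-panel/src/pages/Login.js.
const LOGIN_URL = "http://localhost:5000/api/login";

type AdminSession = { token: string; user: { name: string; role: string } };

// Pure helper: validate the response shape before trusting it (hypothetical).
function parseLoginResponse(data: any): AdminSession {
  if (typeof data?.token !== "string" || typeof data?.user?.name !== "string") {
    throw new Error("Unexpected login response shape");
  }
  return { token: data.token, user: data.user };
}

async function login(username: string, password: string): Promise<AdminSession> {
  const res = await fetch(LOGIN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
  if (!res.ok) throw new Error("Login failed");
  const session = parseLoginResponse(await res.json());
  // Login.js persists session.token and session.user to localStorage,
  // which App.js reads to decide between Login and Dashboard.
  return session;
}
```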
Follow this order to get a working end-to-end system.
- Start MongoDB and ensure the connection string is valid for the Node backend.
- Start the Flask AI service on port `5001`.
- Start the Node backend (`user-panel/backend/server.js`) on port `5000`.
- Start the user-panel Vite dev server (port `8080`).
- Start the admin-panel CRA dev server (port `3000`).

You should now be able to:

- Visit `http://localhost:8080` to use the citizen-facing app.
- Visit `http://localhost:3000` to log into the admin portal.
This section documents the main HTTP endpoints used across the project.
POST http://localhost:5000/api/report

Submit a citizen report as `multipart/form-data` (image plus report metadata). The backend forwards the image to the Flask AI service and stores the report together with the AI result in MongoDB.
The route is defined in utils/reports.js and mounted under /api in backend/server.js.
Purpose:
Internal endpoint used by the Node backend to run AI-based detection on uploaded road images.
POST http://127.0.0.1:5001/predict-image
| Header | Type | Required | Value |
|---|---|---|---|
| Content-Type | string | Yes | multipart/form-data |
| Field | Type | Required | Description |
|---|---|---|---|
| image | file | Yes | Image forwarded from Node backend |
```sh
curl -X POST "http://127.0.0.1:5001/predict-image" \
  -F "image=@sample.jpg"
```

(`-F` uploads the file as `multipart/form-data` and sets the correct `Content-Type` boundary automatically.)

Success response:

```json
{
  "success": true,
  "detections": [
    { "label": "pothole", "confidence": 0.97, "bbox": [x1, y1, x2, y2] }
  ],
  "output_image": "path_or_url_to_annotated_image"
}
```

Error response:

```json
{
  "success": false,
  "error": "Error message"
}
```

The Node backend calls this endpoint using:

```js
axios.post("http://127.0.0.1:5001/predict-image", ...)
```

Authenticate an admin user and return a JWT token plus basic user information.
POST http://localhost:5000/api/login
| Header | Type | Required | Value |
|---|---|---|---|
| Content-Type | string | Yes | application/json |
Request body:

```json
{
  "username": "admin",
  "password": "government123"
}
```

```sh
curl -X POST "http://localhost:5000/api/login" \
  -H "Content-Type: application/json" \
  -d '{
    "username": "admin",
    "password": "government123"
  }'
```

Success response:

```json
{
  "token": "jwt_token_string",
  "user": {
    "name": "Admin User",
    "role": "admin"
  }
}
```

Invalid credentials:

```json
{
  "message": "Login failed"
}
```

Server error:

```json
{
  "message": "Internal server error"
}
```

Matches usage in `admin-panel/src/pages/Login.js`, where `token` and `user` fields are expected.
This project integrates a lightweight YOLO-based deep learning model trained for
pothole and road anomaly detection.
It powers the backend image analysis pipeline and supports real-time detection
through a Flask inference service.
| Attribute | Value |
|---|---|
| Architecture | YOLO-based object detection |
| Total Layers | 129 |
| Parameters | 3,011,628 (~3M) |
| Gradients | 0 (frozen inference weights) |
| Compute Requirement | 8.2 GFLOPs per image |
| Input Resolution | 640 × 640 |
This compact design enables high performance while remaining efficient for:
- laptops
- edge devices
- microservers
- cloud deployment
- Backend receives an image from the mobile app / admin dashboard
- Flask API loads `best.pt` and performs inference
- Model returns:
  - detected potholes
  - confidence score
  - bounding box values
  - annotated output image
- Smart City road monitoring
- Municipal complaint automation
- Fleet vehicle camera systems
- Citizen reporting apps
- Autonomous maintenance assessment
```mermaid
flowchart TB
    App[Mobile App] --> API[Node Backend API]
    Dashboard[Admin Dashboard] --> API
    API --> DB[(MongoDB)]
    API --> AI[Flask YOLO Inference Service]
    AI --> Model[(best.pt Model File)]
    API --> Storage[(Image Storage)]
    API --> Dashboard
    API --> App
```
✔ Lightweight – deployable on low-power hardware
✔ Fast inference – suitable for real-time use
✔ Modular – backend and AI are decoupled
✔ Expandable – can be retrained for cracks, speed breakers, etc.
| File | Purpose |
|---|---|
| `best.pt` | AI model weight file |
| `app.py` | Flask-based inference API exposing `/predict-image` |
- Support multi-class road damage detection
- Add segmentation masks instead of bounding boxes
- Deploy to edge devices like Jetson Nano / Coral TPU
- Add continuous learning with feedback loop
```mermaid
sequenceDiagram
    participant User
    participant Backend
    participant Flask
    participant Model
    User->>Backend: Upload Image
    Backend->>Flask: POST /predict-image
    Flask->>Model: Run inference
    Model-->>Flask: Return detections
    Flask-->>Backend: JSON + annotated output
    Backend-->>User: Display result
```
The trained YOLO-based defect detection model (best.pt) was evaluated across four defect types: crack, pothole, patch, and other.
This section summarizes confidence behavior, accuracy, training curves, dataset distribution, and confusion metrics.
| Class | mAP@0.5 | Peak F1 | Observations |
|---|---|---|---|
| Crack | 0.648 | ~0.60 | Strong detection performance, slight confusion with background |
| Pothole | 0.664 | ~0.62 | Best performing class — clear object structure |
| Patch | 0.412 | ~0.43 | Weakest — likely due to dataset imbalance / ambiguity |
| Other | 0.577 | ~0.58 | Moderate performance with confusion vs background |
| Average | 0.575 | ~0.57 | Strong enough for real-world inference |
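The class-average row can be reproduced as the unweighted mean of the four per-class mAP@0.5 values:

```typescript
// Per-class mAP@0.5 values from the table above.
const mAP: Record<string, number> = {
  crack: 0.648,
  pothole: 0.664,
  patch: 0.412,
  other: 0.577,
};

const values = Object.values(mAP);
const meanMAP = values.reduce((a, b) => a + b, 0) / values.length;

console.log(meanMAP.toFixed(3)); // "0.575"
```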
| Metric | Peak Value | Best Confidence Threshold |
|---|---|---|
| F1 score (overall) | ~0.57 | ~0.28 |
| Precision | ~1.00 | ~0.95 |
| Recall | ~0.85 | ~0.00 |
✔ Increasing confidence improves precision but lowers recall
✔ Best working threshold range → 0.25 – 0.35
✔ Patch class would benefit from augmentation or re-labelling
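The precision/recall trade-off above translates directly into a post-processing step: filter detections against a confidence threshold, optionally per class. This is a sketch, not code from the repo; the detection shape follows the `/predict-image` response, and the per-class threshold values are illustrative:

```typescript
type Detection = { label: string; confidence: number };

// Keep only detections at or above a confidence threshold. Per-class overrides
// allow a stricter cut for weaker classes such as "patch" (values illustrative).
function filterDetections(
  detections: Detection[],
  defaultThreshold = 0.28,
  perClass: Record<string, number> = {}
): Detection[] {
  return detections.filter(
    (d) => d.confidence >= (perClass[d.label] ?? defaultThreshold)
  );
}
```

For example, `filterDetections(dets, 0.28, { patch: 0.4 })` keeps the recommended 0.25–0.35 operating range for most classes while discarding low-confidence patch predictions.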
These plots illustrate how confidence scores affect model stability and output quality.
Overall performance shows mAP@0.5 = 0.575, with per-class values shown in legend.

Shows the exact number of correct/incorrect predictions across classes.
Shows proportional confusion per class for precision analysis.
This figure highlights:
✔ Number of labeled objects per class
✔ Spatial heatmap of object centers
✔ Bounding box size distribution
Loss curves and validation metrics demonstrate consistent convergence throughout training.
✔ Best average operating threshold ~0.28 confidence
✔ Best precision achieved near ~0.95 confidence
✔ Highest confusion occurs between crack vs background
✔ “Patch” class is weakest — likely due to dataset imbalance or class ambiguity
- Improve dataset balance for underrepresented classes
- Add harder background cases to reduce false positives
- Consider threshold tuning per class
- Explore augmentation & label refinement for “patch”






