# Deployment Guide - Participatory Planning Application

## Prerequisites
- Python 3.8+
- 2-4GB RAM (for AI model)
- ~2GB disk space (for model cache)
- Internet connection (first run only)

---

## Option 1: Quick Local Network Demo (5 minutes)

**Perfect for**: Testing with colleagues on same WiFi network

### Steps:

1. **Start the server** (already configured):
   ```bash
   cd /home/thadillo/MyProjects/participatory_planner
   source venv/bin/activate
   python run.py
   ```

2. **Find your IP address**:
   ```bash
   # Linux
   ip addr show | grep "inet " | grep -v 127.0.0.1

   # macOS
   ifconfig | grep "inet " | grep -v 127.0.0.1

   # Or check the Flask startup message for the IP
   ```

3. **Access from other devices**:
   - Open browser on any device on same WiFi
   - Go to: `http://YOUR_IP:5000`
   - Admin login: `ADMIN123`

4. **Share registration link**:
   - Give participants: `http://YOUR_IP:5000/generate`

**Limitations**:
- Only works on local network
- Stops when you close terminal
- Debug mode enabled (slower)

---

## Option 2: Production Server with Gunicorn (Recommended)

**Perfect for**: Real deployments, VPS/cloud hosting

### Steps:

1. **Install Gunicorn**:
   ```bash
   source venv/bin/activate
   pip install gunicorn==21.2.0
   ```

2. **Update environment variables** (`.env`):
   ```bash
   # Generate your own secret key (do not reuse one that has been committed or published):
   #   python -c "import secrets; print(secrets.token_hex(32))"
   FLASK_SECRET_KEY=your-generated-secret-key
   FLASK_ENV=production
   ```

3. **Run with Gunicorn** (a sketch of a typical `gunicorn_config.py` appears after these steps):
   ```bash
   gunicorn --config gunicorn_config.py wsgi:app
   ```

4. **Access**: `http://YOUR_SERVER_IP:8000`
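
Step 3 uses the `gunicorn_config.py` already included in the project. If you need to adjust or recreate it, the sketch below shows the kind of settings it typically contains; the bind address, worker count, and timeout values are illustrative assumptions, not the project's actual configuration.

```python
# gunicorn_config.py - illustrative sketch; values are assumptions, tune for your server
import multiprocessing

# Listen on all interfaces so other machines can reach the app
bind = "0.0.0.0:8000"

# Common starting point: two workers per CPU core, plus one
workers = multiprocessing.cpu_count() * 2 + 1

# AI analysis can take a while on CPU, so allow long requests
timeout = 300

# Send access/error logs to stdout/stderr so journalctl or Docker captures them
accesslog = "-"
errorlog = "-"
loglevel = "info"
```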

### Run in background with systemd:

Create `/etc/systemd/system/participatory-planner.service`:

```ini
[Unit]
Description=Participatory Planning Application
After=network.target

[Service]
User=YOUR_USERNAME
WorkingDirectory=/home/thadillo/MyProjects/participatory_planner
Environment="PATH=/home/thadillo/MyProjects/participatory_planner/venv/bin"
ExecStart=/home/thadillo/MyProjects/participatory_planner/venv/bin/gunicorn --config gunicorn_config.py wsgi:app
Restart=always

[Install]
WantedBy=multi-user.target
```

Then:
```bash
sudo systemctl daemon-reload
sudo systemctl enable participatory-planner
sudo systemctl start participatory-planner
sudo systemctl status participatory-planner
```

---

## Option 3: Docker Deployment (Easiest Production)

**Perfect for**: Clean deployments, easy updates, cloud platforms

### Steps:

1. **Install Docker** (if not installed):
   ```bash
   curl -fsSL https://get.docker.com -o get-docker.sh
   sudo sh get-docker.sh
   ```

2. **Build and run**:
   ```bash
   cd /home/thadillo/MyProjects/participatory_planner
   docker-compose up -d
   ```

3. **Access**: `http://YOUR_SERVER_IP:8000`

### Docker commands:
```bash
# View logs
docker-compose logs -f

# Stop
docker-compose down

# Restart
docker-compose restart

# Update after code changes
docker-compose up -d --build
```

**Data persistence**: The database and AI model cache are stored in Docker volumes, so they survive container restarts and rebuilds.

---

## Option 4: Hugging Face Spaces (Recommended for Public Access)

**Perfect for**: Public demos, academic projects, community engagement, free hosting

### Why Hugging Face Spaces?
- ✅ **Free hosting** with generous limits (CPU, 16GB RAM, persistent storage)
- ✅ **Zero-config HTTPS** - automatic SSL certificates
- ✅ **Docker support** - already configured in this project
- ✅ **Persistent storage** - `/data` directory survives rebuilds
- ✅ **Public URL** - Share with stakeholders instantly
- ✅ **Git-based deployment** - Push to deploy
- ✅ **Model caching** - Hugging Face models download fast

### Quick Deploy Steps

#### 1. Create Hugging Face Account
- Go to [huggingface.co](https://huggingface.co) and sign up (free)
- Verify your email

#### 2. Create New Space
1. Go to [huggingface.co/spaces](https://huggingface.co/spaces)
2. Click **"Create new Space"**
3. Configure:
   - **Space name**: `participatory-planner` (or your choice)
   - **License**: MIT
   - **SDK**: **Docker** (important!)
   - **Visibility**: Public or Private
4. Click **"Create Space"**

#### 3. Deploy Your Code

**Option A: Direct Git Push (Recommended)**
```bash
cd /home/thadillo/MyProjects/participatory_planner

# Add Hugging Face remote (replace YOUR_USERNAME)
git remote add hf https://huggingface.co/spaces/YOUR_USERNAME/participatory-planner

# Push to deploy
git push hf main
```

**Option B: Via Web Interface**
1. In your Space, click **"Files"** tab
2. Upload all project files (drag and drop)
3. Commit changes

#### 4. Monitor Build
- Click **"Logs"** tab to watch Docker build
- First build takes ~5-10 minutes (downloads dependencies)
- Status changes to **"Running"** when ready
- Your app is live at: `https://huggingface.co/spaces/YOUR_USERNAME/participatory-planner`

#### 5. First-Time Setup
1. Access your Space URL
2. Login with admin token: `ADMIN123` (change this!)
3. Go to **Registration** → Create participant tokens
4. Share registration link with stakeholders
5. First AI analysis downloads BART model (~1.6GB, cached permanently)

### Files Already Configured

This project includes everything needed for HF Spaces:

- ✅ **Dockerfile** - Docker configuration (port 7860, /data persistence)
- ✅ **app_hf.py** - Flask entry point for HF Spaces (a hedged sketch appears after this list)
- ✅ **requirements.txt** - Python dependencies
- ✅ **.dockerignore** - Excludes local data/models
- ✅ **README.md** - Displays on Space page
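
An HF Spaces entry point mainly needs to serve the Flask app on port 7860, bound to all interfaces. The sketch below is illustrative only: it assumes the project exposes a `create_app()` application factory (assumed name and import path); the bundled `app_hf.py` is the authoritative version.

```python
# Illustrative sketch of an HF Spaces entry point; the repo's app_hf.py is authoritative.
# create_app() and its import path are assumptions.
import os

from app import create_app  # assumed import path

app = create_app()

if __name__ == "__main__":
    # HF Spaces routes external traffic to port 7860
    port = int(os.environ.get("PORT", 7860))
    app.run(host="0.0.0.0", port=port)
```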

### Environment Variables (Optional)

In your Space **Settings** tab, add:

```bash
SECRET_KEY=your-long-random-secret-key-here
FLASK_ENV=production
```

Generate secure key:
```bash
python -c "import secrets; print(secrets.token_hex(32))"
```

### Data Persistence

Hugging Face Spaces provides `/data` directory:
- ✅ **Database**: Stored at `/data/app.db` (survives rebuilds)
- ✅ **Model cache**: Stored at `/data/.cache/huggingface`
- ✅ **Fine-tuned models**: Stored at `/data/models/finetuned`

**Backup/Restore**:
1. Use Admin → Session Management
2. Export session data as JSON
3. Import to restore on any deployment

### Training Models on HF Spaces

**CPU Training** (free tier):
- **Head-only training**: Works well (<100 examples, 2-5 min)
- **LoRA training**: Slower on CPU (>100 examples, 10-20 min)

**GPU Training** (paid tiers):
- Upgrade Space to GPU for faster training
- Or train locally and import model files

### Updating Your Deployment

```bash
# Make changes locally
git add .
git commit -m "Update: description"
git push hf main

# HF automatically rebuilds and redeploys
# Database and models persist across updates
```

### Troubleshooting HF Spaces

**Build fails?**
- Check Logs tab for specific error
- Verify Dockerfile syntax
- Ensure all dependencies in requirements.txt

**App won't start?**
- Port must be 7860 (already configured)
- Check app_hf.py runs Flask on correct port
- Review Python errors in Logs

**Database not persisting?**
- Verify `/data` directory created in Dockerfile
- Check DATABASE_PATH environment variable
- Ensure permissions (777) on /data

**Models not loading?**
- First download takes time (~5 min for BART)
- Check HF_HOME environment variable
- Verify cache directory permissions

**Out of memory?**
- Reduce batch size in training config
- Use smaller model (distilbart-mnli-12-1)
- Consider GPU Space upgrade

### Scaling on HF Spaces

**Free Tier**:
- CPU only
- ~16GB RAM
- ~50GB persistent storage
- Auto-sleep after inactivity (wakes on request)

**Paid Tiers** (for production):
- GPU access (A10G, A100)
- More RAM and storage
- No auto-sleep
- Custom domains

### Security on HF Spaces

1. **Change admin token** from `ADMIN123` - create a replacement token via the admin UI or the Flask shell (a hedged sketch follows this list)

2. **Set strong secret key** via environment variables

3. **HTTPS automatic** - All HF Spaces use SSL by default

4. **Private Spaces** - Restrict access to specific users
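
A hedged sketch of rotating the admin token from `flask shell`. The model and field names (`Token`, `value`, `role`) are assumptions, not the app's confirmed schema; adapt them to the project's actual models, or use the admin UI instead.

```python
# Illustrative only - Token, value, and role are assumed names, not the app's confirmed schema.
# Run inside `flask shell` on the deployed instance.
import secrets

from app import db            # assumed import path
from app.models import Token  # assumed model location

new_value = secrets.token_hex(8).upper()  # e.g. 'A1B2C3D4E5F6A7B8'

old = Token.query.filter_by(value="ADMIN123").first()
if old:
    db.session.delete(old)

db.session.add(Token(value=new_value, role="admin"))
db.session.commit()
print(f"New admin token: {new_value}")
```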

### Monitoring

- **Status**: Space page shows Running/Building/Error
- **Logs**: Real-time application logs
- **Analytics** (public Spaces): View usage statistics
- **Database size**: Monitor via session export size

### Cost Comparison

| Platform | Cost | CPU | RAM | Storage | HTTPS | Setup Time |
|----------|------|-----|-----|---------|-------|------------|
| **HF Spaces (Free)** | $0 | ✅ | 16GB | 50GB | ✅ | 10 min |
| HF Spaces (GPU) | ~$1/hr | ✅ GPU | 32GB | 100GB | ✅ | 10 min |
| DigitalOcean | $12/mo | ✅ | 2GB | 50GB | ❌ | 30 min |
| AWS EC2 | ~$15/mo | ✅ | 2GB | 20GB | ❌ | 45 min |
| Heroku | $7/mo | ✅ | 512MB | 1GB | ✅ | 20 min |

**Winner for demos/academic use**: Hugging Face Spaces (Free)

### Post-Deployment Checklist

- [ ] Space builds successfully
- [ ] App accessible via public URL
- [ ] Admin login works (token: ADMIN123)
- [ ] Changed default admin token
- [ ] Participant registration works
- [ ] Submission form functional
- [ ] AI analysis runs (first time slow, then cached)
- [ ] Database persists after rebuild
- [ ] Session export/import tested
- [ ] README displays on Space page
- [ ] Shared URL with stakeholders

### Example Deployment

**Live Example**: See [participatory-planner](https://huggingface.co/spaces/YOUR_USERNAME/participatory-planner) (replace with your Space)

---

## Option 5: Other Cloud Platforms

### A) **DigitalOcean App Platform**

1. Push code to GitHub/GitLab
2. Create new App on DigitalOcean
3. Connect repository
4. Configure:
   - Run Command: `gunicorn --config gunicorn_config.py wsgi:app`
   - Environment: Set `FLASK_SECRET_KEY`
   - Resources: 2GB RAM minimum
5. Deploy!

### B) **Heroku**

Create `Procfile`:
```
web: gunicorn --config gunicorn_config.py wsgi:app
```

Deploy:
```bash
heroku create participatory-planner
heroku config:set FLASK_SECRET_KEY=$(python -c "import secrets; print(secrets.token_hex(32))")
git push heroku main
```

### C) **AWS EC2**

1. Launch Ubuntu instance (t3.medium or larger)
2. SSH into server
3. Clone repository
4. Follow "Option 2: Gunicorn" steps above
5. Configure security group: Allow port 8000

### D) **Google Cloud Run** (Serverless)

```bash
gcloud run deploy participatory-planner \
  --source . \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --memory 2Gi
```

---

## Adding HTTPS/SSL (Production Requirement)

### Option A: Nginx Reverse Proxy

1. **Install Nginx**:
   ```bash
   sudo apt install nginx certbot python3-certbot-nginx
   ```

2. **Configure Nginx** (`/etc/nginx/sites-available/participatory-planner`):
   ```nginx
   server {
       listen 80;
       server_name your-domain.com;

       location / {
           proxy_pass http://127.0.0.1:8000;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
       }
   }
   ```

3. **Enable and get SSL**:
   ```bash
   sudo ln -s /etc/nginx/sites-available/participatory-planner /etc/nginx/sites-enabled/
   sudo nginx -t
   sudo systemctl reload nginx
   sudo certbot --nginx -d your-domain.com
   ```

### Option B: Cloudflare Tunnel (Free HTTPS, no open ports)

1. Install the `cloudflared` package (see Cloudflare's install instructions for your OS)
2. Login: `cloudflared tunnel login`
3. Create tunnel: `cloudflared tunnel create participatory-planner`
4. Route: `cloudflared tunnel route dns participatory-planner your-domain.com`
5. Run: `cloudflared tunnel --url http://localhost:8000 run participatory-planner`

---

## Performance Optimization

### For Large Sessions (100+ participants):

1. **Increase Gunicorn workers** (in `gunicorn_config.py`):
   ```python
   workers = 4  # Or more based on CPU cores
   ```

2. **Add Redis caching**:
   ```bash
   pip install Flask-Caching redis
   ```

3. **Move AI analysis to background** with Celery (see the sketch after this list):
   ```bash
   pip install celery redis
   ```
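
For item 3, the sketch below shows one way to push analysis into a Celery worker backed by Redis so web requests return immediately. The `run_analysis` helper and its import path are placeholders (assumptions); wire the task to the app's actual analysis code.

```python
# tasks.py - illustrative sketch; run_analysis() and its module are assumptions
from celery import Celery

celery = Celery(
    "participatory_planner",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/0",
)

@celery.task
def analyze_submissions(session_id: int):
    # Placeholder call into the app's real analysis routine (assumed name)
    from app.analysis import run_analysis
    return run_analysis(session_id)

# In a Flask route, enqueue instead of blocking the request:
#   analyze_submissions.delay(session_id)
# Start a worker alongside Gunicorn:
#   celery -A tasks.celery worker --loglevel=info
```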

---

## Monitoring & Maintenance

### View Application Logs:
```bash
# Gunicorn (stdout)
tail -f /var/log/participatory-planner.log

# Docker
docker-compose logs -f

# Systemd
sudo journalctl -u participatory-planner -f
```

### Backup Data:
```bash
# Export via admin UI (recommended)
# Or copy database file
cp instance/app.db backups/app-$(date +%Y%m%d).db
```

### Update Application:
```bash
# Pull latest code
git pull

# Install dependencies
source venv/bin/activate
pip install -r requirements.txt

# Restart
sudo systemctl restart participatory-planner  # systemd
# OR
docker-compose up -d --build  # Docker
```

---

## Troubleshooting

### Issue: AI model download fails
**Solution**: Ensure 2GB+ free disk space and internet connectivity

### Issue: Port already in use
**Solution**: Change port in `gunicorn_config.py` or `run.py`

### Issue: Workers timing out during analysis
**Solution**: Increase timeout in `gunicorn_config.py`:
```python
timeout = 300  # 5 minutes
```

### Issue: Out of memory
**Solution**: Reduce Gunicorn workers or upgrade RAM (need 2GB minimum)

---

## Security Checklist

- [x] Secret key changed from default
- [x] Debug mode OFF in production (`FLASK_ENV=production`)
- [ ] HTTPS enabled (SSL certificate)
- [ ] Firewall configured (only ports 80, 443, 22 open)
- [ ] Regular backups scheduled
- [ ] Strong admin token (change from ADMIN123)
- [ ] Rate limiting added (optional, use Flask-Limiter; a hedged sketch follows)
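
If you add rate limiting, a minimal Flask-Limiter (3.x API) setup looks roughly like this; the limit values are arbitrary examples, and the decorated route is only meant to illustrate protecting the registration endpoint.

```python
# Minimal Flask-Limiter sketch - limit values are examples, adjust to your traffic
from flask import Flask
from flask_limiter import Limiter
from flask_limiter.util import get_remote_address

app = Flask(__name__)

limiter = Limiter(
    get_remote_address,                          # rate-limit per client IP
    app=app,
    default_limits=["200 per day", "50 per hour"],
)

@app.route("/generate")
@limiter.limit("10 per minute")                  # tighter limit on the registration endpoint
def generate():
    return "registration page"
```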

---

## Quick Reference

| Method | Best For | URL | Setup Time |
|--------|----------|-----|------------|
| Local Network | Testing/demo | http://LOCAL_IP:5000 | 1 min |
| Gunicorn | VPS/dedicated server | http://SERVER_IP:8000 | 10 min |
| Docker | Clean deployment | http://SERVER_IP:8000 | 5 min |
| Cloud Platform | Managed hosting | https://your-app.platform.com | 15 min |

**Default Admin Token**: `ADMIN123` (⚠️ CHANGE IN PRODUCTION)

**Support**: Check logs first, then review error messages in browser console (F12)