Companion to attendance: capture WHAT was done on site each day,
alongside WHO worked. Optional 1:1 with WorkLog. Mobile-first form
auto-redirected from /attendance/log/ on success (with a Skip link).
Why this design (vs. extending WorkLog or per-project templates):
- Hybrid schema. Stable + queryable fields are real columns
(`weather`, `temperature_min`, `temperature_max`, `notes`,
`created_by`, `created_at`, `updated_at`). The METRICS that change
per project / over time live in a single JSONField with shape
`{counts: {key: int}, checks: {key: bool}}` — driven by
`core/site_report_schema.py`. Adding a new metric is a one-line
edit to that file, NO migration required. Old reports without the
new key just render as 0 / unchecked.
- Two-step flow. Attendance form is unchanged; on successful POST
the supervisor lands on `/site-report/<work_log_id>/edit/` for the
most-recently-created log. They can fill in progress details
(~30 sec on a phone) or click "Skip" to home. WorkLogs without a
SiteReport are completely valid historic rows.
- Permission scope mirrors WorkLog access. Anyone who can see the
parent log (admin / log's supervisor / project's supervisors) can
see + edit its SiteReport. Wraps the existing pattern from
`work_history()` in a small helper `_can_access_site_report()`.
What ships:
Models:
- SiteReport (1:1 → WorkLog, weather choices, IntegerField temps,
JSONField metrics defaulting to {})
- Migration 0013_add_site_report (pure CreateModel, no schema
changes to existing tables)
Schema:
- core/site_report_schema.py (NEW) — single source of truth for
the metric list. Currently 7 counts + 4 checks per Konrad's
v1 spec. Helpers: get_count_keys, get_check_keys, label_for,
empty_metrics.
Form:
- SiteReportForm (in core/forms.py) — ModelForm with the four
stable fields PLUS dynamic IntegerField/BooleanField per
metric in __init__. save() serializes both halves into the
JSON blob. clean() validates min ≤ max temperature.
Views:
- site_report_edit — create-or-update; stamps created_by on
first save; preserves it on subsequent admin edits
- site_report_detail — read-only display; 404 when no report
- attendance_log redirect updated to two-step flow
- _can_access_site_report — shared permission helper
URLs:
- /site-report/<work_log_id>/edit/ (name: site_report_edit)
- /site-report/<work_log_id>/ (name: site_report_detail)
Templates:
- site_report_edit.html — mobile-first stack of inputs, weather
as a chunky icon-button row (☀️ ☁️ 🌧️ ⛈️ 🥵 🥶 💨), counts in a
2-col grid, checks as toggle switches, Notes textarea, Skip
+ Save buttons. Iterates pre-built (metric, bound_field)
pairs from the view to avoid needing a new template filter.
- site_report_detail.html — counts as accent-coloured value
cards, checks as a check-list, weather + temp + notes + edit
link.
- work_history.html — added a small clipboard icon next to
each row's date: filled (linked to detail) when a report
exists, muted outline (linked to edit) when not. Click is
event.stopPropagation()-ed so the row's payroll-modal
handler doesn't also fire.
Performance:
- work_history queryset adds .select_related('site_report') so
the new template indicator doesn't introduce an N+1.
Admin:
- SiteReport registered with raw_id_fields on work_log +
created_by, list filters on weather + project + date.
Tests (16 new, full suite 85/85):
- SiteReportModelTests — defaults, 1:1 reverse accessor,
arbitrary-key JSON round-trip
- SiteReportFormTests — dynamic field generation, save
serialisation, temp validation, instance pre-fill
- SiteReportEditViewTests — admin GET/POST, project
supervisor allowed, outsider supervisor 403, created_by
preserved on subsequent admin edits
- SiteReportDetailViewTests — 404 when absent, displays data
when present
- AttendanceLogRedirectsToSiteReportTests — confirms the
two-step flow
CLAUDE.md updates:
- SiteReport added to "Key Models" with shape + reverse-accessor note
- New "SiteReport metric schema" section near "UI-vs-DB
naming drift" — explains the JSON-column-with-Python-source
pattern, when it's safe, what NOT to do (rename a key with
data), and where the keys appear across the codebase
- URL Routes table gets the two new endpoints
What's NOT in this commit (deferred per the brainstorm plan):
- JournalEntry model + manual web-entry UI (Phase A.2 — depends
on Konrad's Q7 answer about Vi/recipient field)
- Letterly inbound webhook (Phase B — integrations branch only,
depends on Q5 sample payload)
- Photos on site reports (Q9, defaulted to "future")
- Per-project metric templates (Q4, defaulted to "same set for all v1")
Reference plan: ~/.claude/plans/prancy-painting-brook.md (local).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
5083 lines
206 KiB
Python
5083 lines
206 KiB
Python
# === VIEWS ===
|
||
# All the page logic for the LabourPay app.
|
||
# Each function here handles a URL and decides what to show the user.
|
||
|
||
import csv
|
||
import json
|
||
import datetime
|
||
import calendar as cal_module
|
||
from decimal import Decimal
|
||
|
||
from django.shortcuts import render, redirect, get_object_or_404
|
||
from django.utils import timezone
|
||
from django.db import transaction
|
||
from django.db.models import Sum, Count, Q, F, Prefetch, Max, Min
|
||
from django.db.models.functions import Coalesce, TruncMonth
|
||
from django.contrib import messages
|
||
from django.contrib.auth.decorators import login_required
|
||
from django.http import JsonResponse, HttpResponseForbidden, HttpResponse
|
||
from django.middleware.csrf import get_token
|
||
from django.views.decorators.http import require_POST
|
||
from django.urls import reverse
|
||
from django.core.mail import EmailMultiAlternatives
|
||
from django.template.loader import render_to_string
|
||
from django.utils.html import strip_tags
|
||
from django.conf import settings
|
||
|
||
from .models import (
|
||
Worker, Project, WorkLog, Team, PayrollRecord, Loan, PayrollAdjustment,
|
||
WorkerCertificate, WorkerWarning,
|
||
SiteReport,
|
||
)
|
||
from .forms import (
|
||
AttendanceLogForm, PayrollAdjustmentForm, ExpenseReceiptForm, ExpenseLineItemFormSet,
|
||
WorkerForm, WorkerCertificateFormSet, WorkerWarningFormSet,
|
||
TeamForm, ProjectForm,
|
||
SiteReportForm,
|
||
)
|
||
from .site_report_schema import COUNT_METRICS, CHECK_METRICS, label_for
|
||
# NOTE: render_to_pdf is NOT imported here at the top level.
|
||
# It's imported lazily inside process_payment() and create_receipt()
|
||
# to avoid crashing the entire app if xhtml2pdf is not installed on the server.
|
||
|
||
|
||
# === PAYROLL CONSTANTS ===
|
||
# These define which adjustment types ADD to a worker's pay vs SUBTRACT from it.
|
||
# "New Loan" and "Advance Payment" are additive — the worker receives money upfront.
|
||
# "Loan Repayment" and "Advance Repayment" are deductive — they reduce net pay.
|
||
ADDITIVE_TYPES = ['Bonus', 'Overtime', 'New Loan', 'Advance Payment']
|
||
DEDUCTIVE_TYPES = ['Deduction', 'Loan Repayment', 'Advance Repayment']
|
||
|
||
|
||
# === PERMISSION HELPERS ===
|
||
# These small functions check what kind of user is logged in.
|
||
# "Admin" = the boss (is_staff or is_superuser in Django).
|
||
# "Supervisor" = someone who manages teams or projects, or is in the Work Logger group.
|
||
|
||
def is_admin(user):
|
||
"""Returns True if the user is staff or superuser (the boss)."""
|
||
return user.is_staff or user.is_superuser
|
||
|
||
|
||
def is_supervisor(user):
|
||
"""Returns True if the user manages teams, has assigned projects, or is a Work Logger."""
|
||
return (
|
||
user.supervised_teams.exists()
|
||
or user.assigned_projects.exists()
|
||
or user.groups.filter(name='Work Logger').exists()
|
||
)
|
||
|
||
|
||
def is_staff_or_supervisor(user):
|
||
"""Returns True if the user is either an admin or a supervisor."""
|
||
return is_admin(user) or is_supervisor(user)
|
||
|
||
|
||
# === PAY SCHEDULE HELPERS ===
|
||
# These help figure out a worker's pay period based on their team's schedule.
|
||
|
||
def get_worker_active_team(worker):
|
||
"""Return the worker's active team (first one found), or None."""
|
||
return worker.teams.filter(active=True).first()
|
||
|
||
|
||
def get_pay_period(team, reference_date=None):
|
||
"""
|
||
Calculate the current pay period's start and end dates for a team.
|
||
|
||
Returns (period_start, period_end) or (None, None) if the team has
|
||
no pay schedule configured.
|
||
|
||
How it works:
|
||
- pay_start_date is the "anchor" — the first day of the very first pay period.
|
||
- pay_frequency determines the length of each period (7, 14, or ~30 days).
|
||
- We step forward from the anchor in period-length increments until
|
||
we find the period that contains reference_date (today by default).
|
||
"""
|
||
if not team or not team.pay_frequency or not team.pay_start_date:
|
||
return (None, None)
|
||
|
||
if reference_date is None:
|
||
reference_date = timezone.now().date()
|
||
|
||
anchor = team.pay_start_date
|
||
|
||
# === WEEKLY / FORTNIGHTLY ===
|
||
# Simple fixed-length periods (7 or 14 days).
|
||
if team.pay_frequency in ('weekly', 'fortnightly'):
|
||
period_days = 7 if team.pay_frequency == 'weekly' else 14
|
||
|
||
# How many full periods have passed since the anchor?
|
||
days_since_anchor = (reference_date - anchor).days
|
||
if days_since_anchor < 0:
|
||
# reference_date is before the anchor — use the first period
|
||
return (anchor, anchor + datetime.timedelta(days=period_days - 1))
|
||
|
||
periods_passed = days_since_anchor // period_days
|
||
period_start = anchor + datetime.timedelta(days=periods_passed * period_days)
|
||
period_end = period_start + datetime.timedelta(days=period_days - 1)
|
||
return (period_start, period_end)
|
||
|
||
# === MONTHLY ===
|
||
# Step through calendar months from the anchor's day-of-month.
|
||
# E.g., anchor = Jan 15 means periods are: Jan 15–Feb 14, Feb 15–Mar 14, etc.
|
||
elif team.pay_frequency == 'monthly':
|
||
anchor_day = anchor.day
|
||
current_start = anchor
|
||
|
||
# Walk forward month by month until we find the period containing today
|
||
for _ in range(120): # Safety limit — 10 years of months
|
||
if current_start.month == 12:
|
||
next_month, next_year = 1, current_start.year + 1
|
||
else:
|
||
next_month, next_year = current_start.month + 1, current_start.year
|
||
|
||
# Clamp anchor day to the max days in that month (e.g., 31 → 28 for Feb)
|
||
max_day = cal_module.monthrange(next_year, next_month)[1]
|
||
next_start = datetime.date(next_year, next_month, min(anchor_day, max_day))
|
||
current_end = next_start - datetime.timedelta(days=1)
|
||
|
||
if reference_date <= current_end:
|
||
return (current_start, current_end)
|
||
current_start = next_start
|
||
|
||
return (None, None)
|
||
|
||
|
||
# =============================================================================
|
||
# === OUTSTANDING PAYMENTS — SHARED HELPER ===
|
||
# Used by the home dashboard AND the payroll report. Computes:
|
||
# - outstanding_payments: Decimal total (unpaid wages + net unpaid adjustments)
|
||
# - unpaid_wages: Decimal (pure daily rates for unpaid workers)
|
||
# - pending_adj_add: Decimal (unpaid additive adjustments, e.g. bonuses)
|
||
# - pending_adj_sub: Decimal (unpaid deductive adjustments, e.g. loan repayments)
|
||
# - outstanding_by_project: dict[str project_name -> Decimal amount]
|
||
#
|
||
# Accepts optional project_ids / team_ids filters. Empty list or None = no filter.
|
||
# =============================================================================
|
||
|
||
def _compute_outstanding(project_ids=None, team_ids=None):
|
||
"""Return current-moment outstanding payment breakdown.
|
||
|
||
Plain-English: for each work log that hasn't been fully paid, adds up
|
||
each unpaid worker's daily rate. Then adds unpaid additive adjustments
|
||
(bonuses, overtime, new loans, advances) and subtracts unpaid deductive
|
||
adjustments (deductions, loan/advance repayments). Results are the
|
||
"as of right now" snapshot shown on the home dashboard's Outstanding
|
||
Payments card. Optional filters scope the answer to specific projects
|
||
and/or teams.
|
||
"""
|
||
# --- Work logs in scope ---
|
||
work_logs = WorkLog.objects.select_related('project').prefetch_related('workers', 'payroll_records')
|
||
if project_ids:
|
||
work_logs = work_logs.filter(project_id__in=project_ids)
|
||
if team_ids:
|
||
work_logs = work_logs.filter(team_id__in=team_ids)
|
||
|
||
unpaid_wages = Decimal('0.00')
|
||
outstanding_by_project = {}
|
||
|
||
for wl in work_logs:
|
||
paid_worker_ids = {pr.worker_id for pr in wl.payroll_records.all()}
|
||
project_name = wl.project.name if wl.project else 'No Project'
|
||
for worker in wl.workers.all():
|
||
if worker.id not in paid_worker_ids:
|
||
cost = worker.daily_rate
|
||
unpaid_wages += cost
|
||
outstanding_by_project.setdefault(project_name, Decimal('0.00'))
|
||
outstanding_by_project[project_name] += cost
|
||
|
||
# --- Unpaid adjustments in scope ---
|
||
adj_qs = PayrollAdjustment.objects.filter(payroll_record__isnull=True).select_related('project')
|
||
if project_ids:
|
||
adj_qs = adj_qs.filter(project_id__in=project_ids)
|
||
if team_ids:
|
||
# worker__teams is M2M — use subquery pattern (see CLAUDE.md Django ORM gotcha)
|
||
adj_qs = adj_qs.filter(
|
||
worker__in=Worker.objects.filter(teams__id__in=team_ids).values('id')
|
||
)
|
||
|
||
pending_adj_add = Decimal('0.00')
|
||
pending_adj_sub = Decimal('0.00')
|
||
for adj in adj_qs:
|
||
project_name = adj.project.name if adj.project else 'No Project'
|
||
outstanding_by_project.setdefault(project_name, Decimal('0.00'))
|
||
if adj.type in ADDITIVE_TYPES:
|
||
pending_adj_add += adj.amount
|
||
outstanding_by_project[project_name] += adj.amount
|
||
elif adj.type in DEDUCTIVE_TYPES:
|
||
pending_adj_sub += adj.amount
|
||
outstanding_by_project[project_name] -= adj.amount
|
||
|
||
outstanding_payments = unpaid_wages + pending_adj_add - pending_adj_sub
|
||
|
||
return {
|
||
'outstanding_payments': outstanding_payments,
|
||
'unpaid_wages': unpaid_wages,
|
||
'pending_adj_add': pending_adj_add,
|
||
'pending_adj_sub': pending_adj_sub,
|
||
'outstanding_by_project': outstanding_by_project,
|
||
}
|
||
|
||
|
||
# =============================================================================
|
||
# === COMPANY COST VELOCITY ===
|
||
# Lifetime "what does a typical FoxFitt working day cost us?" metric.
|
||
# Denominator = COUNT(DISTINCT work_log.date) — true working days, not
|
||
# calendar days (rain days, weekends, permit delays don't dilute the rate).
|
||
# Used by the hero KPI band on the payroll report.
|
||
# =============================================================================
|
||
|
||
def _company_cost_velocity():
|
||
"""Return company-wide avg daily and monthly labour cost (lifetime)."""
|
||
# Total lifetime labour cost: sum of (worker.daily_rate) over every
|
||
# (log, worker) pair that has ever been logged.
|
||
total_cost = Decimal('0.00')
|
||
for wl in WorkLog.objects.prefetch_related('workers').all():
|
||
for worker in wl.workers.all():
|
||
total_cost += worker.daily_rate
|
||
|
||
# Distinct work-log dates = working days
|
||
working_days = WorkLog.objects.values('date').distinct().count()
|
||
|
||
if working_days == 0:
|
||
avg_daily = Decimal('0.00')
|
||
else:
|
||
avg_daily = (total_cost / working_days).quantize(Decimal('0.01'))
|
||
|
||
# 30.44 = 365.25 / 12 — standard month-length approximation.
|
||
# Keeps annualised totals correct on average.
|
||
avg_monthly = (avg_daily * Decimal('30.44')).quantize(Decimal('0.01'))
|
||
|
||
return {
|
||
'avg_daily': avg_daily,
|
||
'avg_monthly': avg_monthly,
|
||
'working_days': working_days,
|
||
}
|
||
|
||
|
||
# =============================================================================
|
||
# === CURRENT OUTSTANDING — SCOPED FOR THE REPORT ===
|
||
# Thin wrapper around _compute_outstanding that shapes the output for
|
||
# the executive report's hero card 2. Includes a 'by_project' list
|
||
# sorted by amount desc, ready for direct template rendering.
|
||
# =============================================================================
|
||
|
||
def _current_outstanding_in_scope(project_ids=None, team_ids=None):
|
||
"""Return current outstanding payments, optionally scoped by project/team.
|
||
|
||
Calls _compute_outstanding and reshapes the by_project dict into a
|
||
list sorted by amount descending (for display). The 'total' field
|
||
is the net outstanding (unpaid wages + additive adjustments minus
|
||
deductive adjustments), matching the home dashboard card.
|
||
"""
|
||
raw = _compute_outstanding(project_ids=project_ids, team_ids=team_ids)
|
||
by_project_list = sorted(
|
||
[{'name': name, 'amount': amt} for name, amt in raw['outstanding_by_project'].items()],
|
||
key=lambda r: r['amount'],
|
||
reverse=True,
|
||
)
|
||
return {
|
||
'total': raw['outstanding_payments'],
|
||
'by_project': by_project_list,
|
||
}
|
||
|
||
|
||
# =============================================================================
|
||
# === TEAM × PROJECT ACTIVITY PIVOT ===
|
||
# Chapter IV of the executive report: "how many days did each team work
|
||
# on each project in this period?" Cell value = COUNT(DISTINCT work_log.date).
|
||
# Logs with no team (team IS NULL) are excluded — the pivot is meaningless
|
||
# without a team axis.
|
||
# =============================================================================
|
||
|
||
def _team_project_activity(work_logs_qs):
|
||
"""Return pivot data for team × project activity within a work-logs queryset.
|
||
|
||
Plain-English: for each team-project pair represented in the given
|
||
queryset, counts the number of distinct calendar dates the team worked
|
||
on that project. Rows and columns include only teams/projects that
|
||
actually appeared (zero-activity teams/projects aren't shown).
|
||
"""
|
||
# Narrow to logs that have both a team and a project (we can't pivot
|
||
# on NULL axes; also filters out the "No Project" ghost rows).
|
||
qs = work_logs_qs.filter(team__isnull=False, project__isnull=False)
|
||
|
||
# Aggregate: (team_id, project_id) -> distinct dates
|
||
rows_data = qs.values(
|
||
'team_id', 'team__name', 'project_id', 'project__name'
|
||
).annotate(days=Count('date', distinct=True)).order_by('team__name')
|
||
|
||
# Build column list (unique projects, ordered by name)
|
||
columns_seen = {}
|
||
for r in rows_data:
|
||
columns_seen.setdefault(r['project_id'], r['project__name'])
|
||
columns = [
|
||
{'id': pid, 'name': pname}
|
||
for pid, pname in sorted(columns_seen.items(), key=lambda kv: kv[1])
|
||
]
|
||
|
||
# Build rows: team_id -> cells_by_project_id dict
|
||
rows_by_team = {} # team_id -> {'team_id', 'team_name', 'cells_by_project_id', 'row_total'}
|
||
col_totals = {col['id']: 0 for col in columns}
|
||
grand_total = 0
|
||
|
||
for r in rows_data:
|
||
tid = r['team_id']
|
||
pid = r['project_id']
|
||
days = r['days']
|
||
row = rows_by_team.setdefault(tid, {
|
||
'team_id': tid,
|
||
'team_name': r['team__name'],
|
||
'cells_by_project_id': {},
|
||
'row_total': 0,
|
||
})
|
||
row['cells_by_project_id'][pid] = days
|
||
row['row_total'] += days
|
||
col_totals[pid] += days
|
||
grand_total += days
|
||
|
||
# Ordered rows list (by team name)
|
||
rows = sorted(rows_by_team.values(), key=lambda r: r['team_name'])
|
||
|
||
return {
|
||
'columns': columns,
|
||
'rows': rows,
|
||
'col_totals': col_totals,
|
||
'grand_total': grand_total,
|
||
}
|
||
|
||
|
||
# === HOME DASHBOARD ===
|
||
# The main page users see after logging in. Shows different content
|
||
# depending on whether the user is an admin or supervisor.
|
||
|
||
@login_required
|
||
def index(request):
|
||
user = request.user
|
||
|
||
if is_admin(user):
|
||
# --- ADMIN DASHBOARD ---
|
||
|
||
# === OUTSTANDING BREAKDOWN ===
|
||
# Uses the shared _compute_outstanding helper so the dashboard and the
|
||
# payroll report can't drift. Unscoped (no filters) = whole company.
|
||
_o = _compute_outstanding()
|
||
outstanding_payments = _o['outstanding_payments']
|
||
unpaid_wages = _o['unpaid_wages']
|
||
pending_adjustments_add = _o['pending_adj_add']
|
||
pending_adjustments_sub = _o['pending_adj_sub']
|
||
outstanding_by_project = _o['outstanding_by_project']
|
||
|
||
# Sum total paid out in the last 60 days
|
||
sixty_days_ago = timezone.now().date() - timezone.timedelta(days=60)
|
||
paid_this_month = PayrollRecord.objects.filter(
|
||
date__gte=sixty_days_ago
|
||
).aggregate(total=Sum('amount_paid'))['total'] or Decimal('0.00')
|
||
|
||
# Count and total balance of active loans
|
||
active_loans_qs = Loan.objects.filter(active=True)
|
||
active_loans_count = active_loans_qs.count()
|
||
active_loans_balance = active_loans_qs.aggregate(
|
||
total=Sum('remaining_balance')
|
||
)['total'] or Decimal('0.00')
|
||
|
||
# This week summary
|
||
start_of_week = timezone.now().date() - timezone.timedelta(
|
||
days=timezone.now().date().weekday()
|
||
)
|
||
this_week_logs = WorkLog.objects.filter(date__gte=start_of_week).count()
|
||
|
||
# Recent activity — last 5 work logs
|
||
recent_activity = WorkLog.objects.select_related(
|
||
'project', 'supervisor'
|
||
).prefetch_related('workers').order_by('-date', '-id')[:5]
|
||
|
||
# All workers, projects, and teams for the Manage Resources tab.
|
||
# The template uses a JS filter bar (Active / Inactive / All) to show/hide
|
||
# rows based on data-active attribute — defaults to showing only active items.
|
||
workers = Worker.objects.all().order_by('name')
|
||
projects = Project.objects.all().order_by('name')
|
||
teams = Team.objects.all().order_by('name')
|
||
|
||
# === CERT EXPIRY SUMMARY ===
|
||
# Count certificates that are expired or expire within the next 30 days.
|
||
# Only shown on the dashboard when the count is non-zero (so the stat
|
||
# card disappears when everything is in good standing).
|
||
today = datetime.date.today()
|
||
thirty_days_out = today + datetime.timedelta(days=30)
|
||
certs_expired_count = WorkerCertificate.objects.filter(
|
||
valid_until__lt=today, worker__active=True,
|
||
).count()
|
||
certs_expiring_count = WorkerCertificate.objects.filter(
|
||
valid_until__gte=today, valid_until__lte=thirty_days_out,
|
||
worker__active=True,
|
||
).count()
|
||
certs_alert_total = certs_expired_count + certs_expiring_count
|
||
|
||
context = {
|
||
'is_admin': True,
|
||
'outstanding_payments': outstanding_payments,
|
||
'unpaid_wages': unpaid_wages,
|
||
'pending_adjustments_add': pending_adjustments_add,
|
||
'pending_adjustments_sub': pending_adjustments_sub,
|
||
'paid_this_month': paid_this_month,
|
||
'active_loans_count': active_loans_count,
|
||
'active_loans_balance': active_loans_balance,
|
||
'outstanding_by_project': outstanding_by_project,
|
||
'this_week_logs': this_week_logs,
|
||
'recent_activity': recent_activity,
|
||
'workers': workers,
|
||
'projects': projects,
|
||
'teams': teams,
|
||
# Cert-expiry card (rendered only when > 0)
|
||
'certs_expired_count': certs_expired_count,
|
||
'certs_expiring_count': certs_expiring_count,
|
||
'certs_alert_total': certs_alert_total,
|
||
# Empty on the home dashboard — modal opens clean (no pre-selected filters)
|
||
'selected_project_ids': [],
|
||
'selected_team_ids': [],
|
||
}
|
||
return render(request, 'core/index.html', context)
|
||
|
||
else:
|
||
# --- SUPERVISOR DASHBOARD ---
|
||
|
||
# Count projects this supervisor is assigned to
|
||
my_projects_count = user.assigned_projects.filter(active=True).count()
|
||
|
||
# Count teams this supervisor manages
|
||
my_teams_count = user.supervised_teams.filter(active=True).count()
|
||
|
||
# Count unique workers across all their teams
|
||
my_workers_count = Worker.objects.filter(
|
||
active=True,
|
||
teams__supervisor=user,
|
||
teams__active=True
|
||
).distinct().count()
|
||
|
||
# This week summary — only their own logs
|
||
start_of_week = timezone.now().date() - timezone.timedelta(
|
||
days=timezone.now().date().weekday()
|
||
)
|
||
this_week_logs = WorkLog.objects.filter(
|
||
date__gte=start_of_week, supervisor=user
|
||
).count()
|
||
|
||
# Their last 5 work logs
|
||
recent_activity = WorkLog.objects.filter(
|
||
supervisor=user
|
||
).select_related('project').prefetch_related('workers').order_by('-date', '-id')[:5]
|
||
|
||
context = {
|
||
'is_admin': False,
|
||
'my_projects_count': my_projects_count,
|
||
'my_teams_count': my_teams_count,
|
||
'my_workers_count': my_workers_count,
|
||
'this_week_logs': this_week_logs,
|
||
'recent_activity': recent_activity,
|
||
}
|
||
return render(request, 'core/index.html', context)
|
||
|
||
|
||
# === ATTENDANCE LOGGING ===
|
||
# This is where supervisors log which workers showed up to work each day.
|
||
# Supports logging a single day or a date range (e.g. a whole week).
|
||
# Includes conflict detection to prevent double-logging workers.
|
||
|
||
@login_required
|
||
def attendance_log(request):
|
||
user = request.user
|
||
|
||
if request.method == 'POST':
|
||
form = AttendanceLogForm(request.POST, user=user)
|
||
|
||
if form.is_valid():
|
||
start_date = form.cleaned_data['date']
|
||
end_date = form.cleaned_data.get('end_date') or start_date
|
||
include_saturday = form.cleaned_data.get('include_saturday', False)
|
||
include_sunday = form.cleaned_data.get('include_sunday', False)
|
||
project = form.cleaned_data['project']
|
||
team = form.cleaned_data.get('team')
|
||
workers = form.cleaned_data['workers']
|
||
overtime_amount = form.cleaned_data['overtime_amount']
|
||
notes = form.cleaned_data.get('notes', '')
|
||
|
||
# --- Build list of dates to log ---
|
||
# Go through each day from start to end, skipping weekends
|
||
# unless the user checked the "Include Saturday/Sunday" boxes
|
||
dates_to_log = []
|
||
current_date = start_date
|
||
while current_date <= end_date:
|
||
day_of_week = current_date.weekday() # 0=Mon, 5=Sat, 6=Sun
|
||
if day_of_week == 5 and not include_saturday:
|
||
current_date += datetime.timedelta(days=1)
|
||
continue
|
||
if day_of_week == 6 and not include_sunday:
|
||
current_date += datetime.timedelta(days=1)
|
||
continue
|
||
dates_to_log.append(current_date)
|
||
current_date += datetime.timedelta(days=1)
|
||
|
||
if not dates_to_log:
|
||
messages.warning(request, 'No valid dates in the selected range.')
|
||
# Still need team_workers_json for the JS even on error re-render
|
||
tw_map = {}
|
||
for t in Team.objects.filter(active=True).prefetch_related('workers'):
|
||
tw_map[t.id] = list(t.workers.filter(active=True).values_list('id', flat=True))
|
||
return render(request, 'core/attendance_log.html', {
|
||
'form': form,
|
||
'is_admin': is_admin(user),
|
||
'team_workers_json': json.dumps(tw_map),
|
||
})
|
||
|
||
# --- Conflict detection ---
|
||
# Check if any selected workers already have a WorkLog on any of these dates
|
||
worker_ids = list(workers.values_list('id', flat=True))
|
||
existing_logs = WorkLog.objects.filter(
|
||
date__in=dates_to_log,
|
||
workers__id__in=worker_ids
|
||
).prefetch_related('workers').select_related('project')
|
||
|
||
conflicts = []
|
||
for log in existing_logs:
|
||
for w in log.workers.all():
|
||
if w.id in worker_ids:
|
||
conflicts.append({
|
||
'worker_name': w.name,
|
||
'date': log.date,
|
||
'project_name': log.project.name,
|
||
})
|
||
|
||
# If there are conflicts and the user hasn't chosen what to do yet
|
||
conflict_action = request.POST.get('conflict_action', '')
|
||
if conflicts and not conflict_action:
|
||
# Show the conflict warning — let user choose Skip or Overwrite
|
||
# Still need team_workers_json for the JS even on conflict re-render
|
||
tw_map = {}
|
||
for t in Team.objects.filter(active=True).prefetch_related('workers'):
|
||
tw_map[t.id] = list(t.workers.filter(active=True).values_list('id', flat=True))
|
||
|
||
# Pass the selected worker IDs explicitly for the conflict
|
||
# re-submission forms. We can't use form.data.workers in the
|
||
# template because QueryDict.__getitem__ returns only the last
|
||
# value, losing all other selections for multi-value fields.
|
||
selected_worker_ids = request.POST.getlist('workers')
|
||
|
||
return render(request, 'core/attendance_log.html', {
|
||
'form': form,
|
||
'conflicts': conflicts,
|
||
'is_admin': is_admin(user),
|
||
'team_workers_json': json.dumps(tw_map),
|
||
'selected_worker_ids': selected_worker_ids,
|
||
})
|
||
|
||
# --- Create work logs ---
|
||
created_count = 0
|
||
skipped_count = 0
|
||
# Track the IDs of work logs we just created so we can redirect
|
||
# the supervisor to the site-report form for the most recent one.
|
||
created_log_ids = []
|
||
|
||
for log_date in dates_to_log:
|
||
# Check which workers already have a log on this date
|
||
workers_with_existing = set(
|
||
WorkLog.objects.filter(
|
||
date=log_date,
|
||
workers__id__in=worker_ids
|
||
).values_list('workers__id', flat=True)
|
||
)
|
||
|
||
if conflict_action == 'overwrite':
|
||
# Remove conflicting workers from their existing logs
|
||
conflicting_logs = WorkLog.objects.filter(
|
||
date=log_date,
|
||
workers__id__in=worker_ids
|
||
)
|
||
for existing_log in conflicting_logs:
|
||
for w_id in worker_ids:
|
||
existing_log.workers.remove(w_id)
|
||
workers_to_add = workers
|
||
elif conflict_action == 'skip':
|
||
# Skip workers who already have logs on this date
|
||
workers_to_add = workers.exclude(id__in=workers_with_existing)
|
||
skipped_count += len(workers_with_existing & set(worker_ids))
|
||
else:
|
||
# No conflicts, or first submission — add all workers
|
||
workers_to_add = workers
|
||
|
||
if workers_to_add.exists():
|
||
# Create the WorkLog record
|
||
work_log = WorkLog.objects.create(
|
||
date=log_date,
|
||
project=project,
|
||
team=team,
|
||
supervisor=user, # Auto-set to logged-in user
|
||
overtime_amount=overtime_amount,
|
||
notes=notes,
|
||
)
|
||
work_log.workers.set(workers_to_add)
|
||
created_count += 1
|
||
created_log_ids.append(work_log.id)
|
||
|
||
# Show success message
|
||
if created_count > 0:
|
||
msg = f'Successfully created {created_count} work log(s).'
|
||
if skipped_count > 0:
|
||
msg += f' Skipped {skipped_count} conflicts.'
|
||
messages.success(request, msg)
|
||
else:
|
||
messages.warning(request, 'No work logs created — all entries were conflicts.')
|
||
|
||
# Two-step flow: after attendance, send the supervisor to the
|
||
# site-report form so they can log progress + weather while it's
|
||
# fresh in their head. The form has a "Skip" link to home for
|
||
# supervisors who're in a hurry. If we created NO logs, fall
|
||
# back to the old behaviour and just go home.
|
||
if created_log_ids:
|
||
# Redirect to the report for the LAST created log (most
|
||
# recent date when a date range was used — typically today
|
||
# for single-day entries).
|
||
return redirect('site_report_edit', work_log_id=created_log_ids[-1])
|
||
return redirect('home')
|
||
else:
|
||
# Don't pre-fill the start date — force the user to pick one
|
||
# so they don't accidentally log work on the wrong day
|
||
form = AttendanceLogForm(user=user)
|
||
|
||
# Build a list of worker data for the estimated cost JavaScript
|
||
# (admins only — supervisors don't see the cost card)
|
||
worker_rates = {}
|
||
if is_admin(user):
|
||
for w in Worker.objects.filter(active=True):
|
||
worker_rates[str(w.id)] = str(w.daily_rate)
|
||
|
||
# Build team→workers mapping so the JS can auto-check workers when a
|
||
# team is selected from the dropdown. Key = team ID, Value = list of worker IDs.
|
||
team_workers_map = {}
|
||
teams_qs = Team.objects.filter(active=True).prefetch_related('workers')
|
||
if not is_admin(user):
|
||
# Supervisors only see their own teams
|
||
teams_qs = teams_qs.filter(supervisor=user)
|
||
for team in teams_qs:
|
||
active_worker_ids = list(
|
||
team.workers.filter(active=True).values_list('id', flat=True)
|
||
)
|
||
team_workers_map[team.id] = active_worker_ids
|
||
|
||
return render(request, 'core/attendance_log.html', {
|
||
'form': form,
|
||
'is_admin': is_admin(user),
|
||
'worker_rates_json': worker_rates,
|
||
'team_workers_json': json.dumps(team_workers_map),
|
||
})
|
||
|
||
|
||
# =============================================================================
|
||
# === SITE REPORT (DAILY PROGRESS) ===
|
||
# Two-step companion to attendance logging. After a supervisor logs WHO
|
||
# worked today, they're redirected to log WHAT was done (counts, checks,
|
||
# weather, free-form notes). Optional — they can skip via the link in
|
||
# the form template.
|
||
# =============================================================================
|
||
|
||
|
||
def _can_access_site_report(user, work_log):
|
||
"""Permission check for SiteReport edit + detail views.
|
||
|
||
Anyone who can see the parent WorkLog can see/edit its site report:
|
||
- Admins (is_staff or is_superuser): all logs
|
||
- Supervisors: logs they own (supervisor=user) OR logs whose
|
||
project they're assigned to (project.supervisors contains user)
|
||
|
||
Mirrors the queryset filter in `work_history()` so the two stay in
|
||
sync. Returns True if the user is allowed; the calling view should
|
||
return HttpResponseForbidden if False.
|
||
"""
|
||
if is_admin(user):
|
||
return True
|
||
if work_log.supervisor_id == user.id:
|
||
return True
|
||
if work_log.project.supervisors.filter(id=user.id).exists():
|
||
return True
|
||
return False
|
||
|
||
|
||
@login_required
|
||
def site_report_edit(request, work_log_id):
|
||
"""Create-or-update a SiteReport for the given WorkLog.
|
||
|
||
URL: /site-report/<work_log_id>/edit/
|
||
Permission: admin, or the WorkLog's supervisor / project supervisor.
|
||
Behaviour:
|
||
- GET on a WorkLog WITHOUT an existing report → blank form
|
||
- GET on a WorkLog WITH a report → form pre-filled with current values
|
||
- POST → save, flash a toast, redirect home
|
||
The form has a "Skip" link in the template that goes straight home.
|
||
"""
|
||
work_log = get_object_or_404(
|
||
WorkLog.objects.select_related('project', 'team', 'supervisor'),
|
||
id=work_log_id,
|
||
)
|
||
|
||
if not _can_access_site_report(request.user, work_log):
|
||
return HttpResponseForbidden(
|
||
"You don't have permission to edit this site report."
|
||
)
|
||
|
||
# Pull the existing report if there is one. The 1:1 reverse accessor
|
||
# raises DoesNotExist when absent, so we wrap with a try/except (a
|
||
# bit cleaner than hasattr() given the field name).
|
||
try:
|
||
site_report = work_log.site_report
|
||
except SiteReport.DoesNotExist:
|
||
site_report = None
|
||
|
||
if request.method == 'POST':
|
||
form = SiteReportForm(
|
||
request.POST,
|
||
instance=site_report, # None = create new
|
||
work_log=work_log, # only used when creating
|
||
)
|
||
if form.is_valid():
|
||
instance = form.save(commit=False)
|
||
# Stamp `created_by` only on first save — keep the original
|
||
# author even if an admin edits it later. updated_at is
|
||
# handled automatically by auto_now=True on the model.
|
||
if instance.pk is None:
|
||
instance.created_by = request.user
|
||
instance.save()
|
||
messages.success(
|
||
request,
|
||
f"Site report saved for {work_log.project.name} on {work_log.date:%d %b %Y}.",
|
||
)
|
||
return redirect('home')
|
||
# Form validation errors fall through to render(...) below
|
||
else:
|
||
form = SiteReportForm(instance=site_report, work_log=work_log)
|
||
|
||
# Build (metric_dict, bound_field) pairs for the template. The form's
|
||
# dynamic fields are named "count_<key>" / "check_<key>" — the
|
||
# template iterates these pairs rather than calling form[name] with
|
||
# a variable key (which Django templates can't do without a custom
|
||
# filter). Pre-building here keeps the template clean.
|
||
count_field_pairs = [
|
||
(m, form[f"count_{m['key']}"]) for m in COUNT_METRICS
|
||
]
|
||
check_field_pairs = [
|
||
(m, form[f"check_{m['key']}"]) for m in CHECK_METRICS
|
||
]
|
||
|
||
return render(request, 'core/site_report_edit.html', {
|
||
'form': form,
|
||
'work_log': work_log,
|
||
'site_report': site_report,
|
||
'count_field_pairs': count_field_pairs,
|
||
'check_field_pairs': check_field_pairs,
|
||
'is_creating': site_report is None,
|
||
})
|
||
|
||
|
||
@login_required
|
||
def site_report_detail(request, work_log_id):
|
||
"""Read-only view of a SiteReport.
|
||
|
||
URL: /site-report/<work_log_id>/
|
||
Permission: same scope as site_report_edit.
|
||
Behaviour:
|
||
- 404 if there's no report for this work log (use the edit URL
|
||
to create one — the link in templates points to the right URL
|
||
based on whether a report exists).
|
||
"""
|
||
work_log = get_object_or_404(
|
||
WorkLog.objects.select_related('project', 'team', 'supervisor'),
|
||
id=work_log_id,
|
||
)
|
||
|
||
if not _can_access_site_report(request.user, work_log):
|
||
return HttpResponseForbidden(
|
||
"You don't have permission to view this site report."
|
||
)
|
||
|
||
site_report = get_object_or_404(SiteReport, work_log=work_log)
|
||
|
||
# Build a flat list of (label, value) pairs for the template — this
|
||
# is the easiest way to render historic JSON keys whose label might
|
||
# have been retired from the schema (label_for falls back to the key).
|
||
counts_display = []
|
||
for key, value in (site_report.metrics.get('counts', {}) or {}).items():
|
||
counts_display.append({'key': key, 'label': label_for(key), 'value': value})
|
||
|
||
checks_display = []
|
||
for key, value in (site_report.metrics.get('checks', {}) or {}).items():
|
||
checks_display.append({'key': key, 'label': label_for(key), 'value': bool(value)})
|
||
|
||
return render(request, 'core/site_report_detail.html', {
|
||
'work_log': work_log,
|
||
'site_report': site_report,
|
||
'counts_display': counts_display,
|
||
'checks_display': checks_display,
|
||
})
|
||
|
||
|
||
# === WORK LOG HISTORY ===
|
||
# Shows work logs in two modes: a table list or a monthly calendar grid.
|
||
# Supervisors only see their own projects. Admins see everything.
|
||
# The calendar view groups logs by day and lets you click a day to see details.
|
||
|
||
@login_required
|
||
def work_history(request):
|
||
user = request.user
|
||
|
||
# Start with base queryset
|
||
# NOTE: select_related('site_report') prevents an N+1 query when
|
||
# the template checks `log.site_report` for the indicator icon.
|
||
# The 1:1 reverse relation is lazy by default — without this hint,
|
||
# each row would issue a separate SELECT.
|
||
if is_admin(user):
|
||
logs = WorkLog.objects.select_related('site_report')
|
||
else:
|
||
# Supervisors only see logs for their projects
|
||
logs = WorkLog.objects.select_related('site_report').filter(
|
||
Q(supervisor=user) | Q(project__supervisors=user)
|
||
).distinct()
|
||
|
||
# --- Filters ---
|
||
# Read filter values from the URL query string.
|
||
# Validate numeric params to prevent 500 errors from bad/malformed URLs.
|
||
worker_filter = request.GET.get('worker', '')
|
||
project_filter = request.GET.get('project', '')
|
||
status_filter = request.GET.get('status', '')
|
||
|
||
# Validate: worker and project must be numeric IDs (or empty)
|
||
try:
|
||
worker_filter = str(int(worker_filter)) if worker_filter else ''
|
||
except (ValueError, TypeError):
|
||
worker_filter = ''
|
||
try:
|
||
project_filter = str(int(project_filter)) if project_filter else ''
|
||
except (ValueError, TypeError):
|
||
project_filter = ''
|
||
|
||
# Count total logs BEFORE filtering (so we can show "X of Y" to the user)
|
||
total_log_count = logs.count()
|
||
|
||
if worker_filter:
|
||
logs = logs.filter(workers__id=worker_filter).distinct()
|
||
|
||
if project_filter:
|
||
logs = logs.filter(project__id=project_filter)
|
||
|
||
if status_filter == 'paid':
|
||
# "Paid" = has at least one PayrollRecord linked
|
||
logs = logs.filter(payroll_records__isnull=False).distinct()
|
||
elif status_filter == 'unpaid':
|
||
# "Unpaid" = has no PayrollRecord linked
|
||
logs = logs.filter(payroll_records__isnull=True)
|
||
|
||
# Track whether any filter is active (for showing feedback in the template)
|
||
has_active_filters = bool(worker_filter or project_filter or status_filter)
|
||
|
||
# Count filtered results BEFORE adding joins (more efficient SQL)
|
||
filtered_log_count = logs.count() if has_active_filters else 0
|
||
|
||
# If filtering by worker, look up the Worker object so the template can
|
||
# show just that worker's name instead of all workers on the log.
|
||
filtered_worker_obj = None
|
||
if worker_filter:
|
||
filtered_worker_obj = Worker.objects.filter(id=worker_filter).first()
|
||
|
||
# Add related data and order by date (newest first)
|
||
logs = logs.select_related(
|
||
'project', 'supervisor'
|
||
).prefetch_related('workers', 'payroll_records').order_by('-date', '-id')
|
||
|
||
# Get filter options for the dropdowns
|
||
if is_admin(user):
|
||
filter_workers = Worker.objects.filter(active=True).order_by('name')
|
||
filter_projects = Project.objects.filter(active=True).order_by('name')
|
||
else:
|
||
supervised_teams = Team.objects.filter(supervisor=user, active=True)
|
||
filter_workers = Worker.objects.filter(
|
||
active=True, teams__in=supervised_teams
|
||
).distinct().order_by('name')
|
||
filter_projects = Project.objects.filter(
|
||
active=True, supervisors=user
|
||
).order_by('name')
|
||
|
||
# --- View mode: list or calendar ---
|
||
view_mode = request.GET.get('view', 'list')
|
||
today = timezone.now().date()
|
||
|
||
# Build a query string that preserves all current filters
|
||
# (used by the List/Calendar toggle links to keep filters when switching)
|
||
filter_params = ''
|
||
if worker_filter:
|
||
filter_params += '&worker=' + worker_filter
|
||
if project_filter:
|
||
filter_params += '&project=' + project_filter
|
||
if status_filter:
|
||
filter_params += '&status=' + status_filter
|
||
|
||
context = {
|
||
'logs': logs,
|
||
'filter_workers': filter_workers,
|
||
'filter_projects': filter_projects,
|
||
'selected_worker': worker_filter,
|
||
'selected_project': project_filter,
|
||
'selected_status': status_filter,
|
||
'is_admin': is_admin(user),
|
||
'view_mode': view_mode,
|
||
'filter_params': filter_params,
|
||
'has_active_filters': has_active_filters,
|
||
'total_log_count': total_log_count,
|
||
'filtered_log_count': filtered_log_count,
|
||
'filtered_worker_obj': filtered_worker_obj,
|
||
}
|
||
|
||
# === CALENDAR MODE ===
|
||
# Build a monthly grid of days, each containing the work logs for that day.
|
||
# Also build a JSON object keyed by date string for the JavaScript
|
||
# click-to-see-details panel.
|
||
if view_mode == 'calendar':
|
||
# Get target month from URL (default: current month)
|
||
try:
|
||
target_year = int(request.GET.get('year', today.year))
|
||
target_month = int(request.GET.get('month', today.month))
|
||
if not (1 <= target_month <= 12):
|
||
target_year, target_month = today.year, today.month
|
||
except (ValueError, TypeError):
|
||
target_year, target_month = today.year, today.month
|
||
|
||
# Build the calendar grid using Python's calendar module.
|
||
# monthdatescalendar() returns a list of weeks, where each week is
|
||
# a list of 7 datetime.date objects (including overflow from prev/next month).
|
||
cal = cal_module.Calendar(firstweekday=0) # Week starts on Monday
|
||
month_dates = cal.monthdatescalendar(target_year, target_month)
|
||
|
||
# Get the full date range for the calendar grid (includes overflow days)
|
||
first_display_date = month_dates[0][0]
|
||
last_display_date = month_dates[-1][-1]
|
||
|
||
# Filter logs to only this date range (improves performance)
|
||
month_logs = logs.filter(date__range=[first_display_date, last_display_date])
|
||
|
||
# Group logs by date string for quick lookup
|
||
logs_by_date = {}
|
||
for log in month_logs:
|
||
date_key = log.date.isoformat()
|
||
if date_key not in logs_by_date:
|
||
logs_by_date[date_key] = []
|
||
logs_by_date[date_key].append(log)
|
||
|
||
# Build the calendar_weeks structure that the template iterates over.
|
||
# Each day cell has: date, day number, whether it's the current month,
|
||
# a list of log objects, and a count badge number.
|
||
calendar_weeks = []
|
||
for week in month_dates:
|
||
week_data = []
|
||
for day in week:
|
||
date_key = day.isoformat()
|
||
day_logs = logs_by_date.get(date_key, [])
|
||
week_data.append({
|
||
'date': day,
|
||
'day': day.day,
|
||
'is_current_month': day.month == target_month,
|
||
'is_today': day == today,
|
||
'records': day_logs,
|
||
'count': len(day_logs),
|
||
})
|
||
calendar_weeks.append(week_data)
|
||
|
||
# Build detail data for JavaScript — when you click a day cell,
|
||
# the JS reads this JSON to populate the detail panel below the calendar.
|
||
# NOTE: Pass raw Python dict, not json.dumps() — the template's
|
||
# |json_script filter handles serialization.
|
||
#
|
||
# IMPORTANT: When a worker filter is active, log.workers.all() would
|
||
# still return ALL workers on that WorkLog (not just the filtered one).
|
||
# We need to narrow the displayed workers to match the filter.
|
||
calendar_detail = {}
|
||
for date_key, day_logs in logs_by_date.items():
|
||
calendar_detail[date_key] = []
|
||
for log in day_logs:
|
||
# Get the workers to show — if filtering by worker,
|
||
# only show that worker (not everyone else on the log)
|
||
if worker_filter:
|
||
display_workers = [
|
||
w for w in log.workers.all()
|
||
if str(w.id) == worker_filter
|
||
]
|
||
else:
|
||
display_workers = list(log.workers.all())
|
||
|
||
entry = {
|
||
'project': log.project.name,
|
||
'workers': [w.name for w in display_workers],
|
||
'supervisor': (
|
||
log.supervisor.get_full_name() or log.supervisor.username
|
||
) if log.supervisor else '-',
|
||
'notes': log.notes or '',
|
||
'is_paid': log.payroll_records.exists(),
|
||
'overtime': log.get_overtime_amount_display() if log.overtime_amount > 0 else '',
|
||
}
|
||
# Only show cost data to admins — use filtered workers for amount
|
||
if is_admin(user):
|
||
entry['amount'] = float(
|
||
sum(w.daily_rate for w in display_workers)
|
||
)
|
||
calendar_detail[date_key].append(entry)
|
||
|
||
# Calculate previous/next month for navigation arrows
|
||
if target_month == 1:
|
||
prev_year, prev_month = target_year - 1, 12
|
||
else:
|
||
prev_year, prev_month = target_year, target_month - 1
|
||
|
||
if target_month == 12:
|
||
next_year, next_month = target_year + 1, 1
|
||
else:
|
||
next_year, next_month = target_year, target_month + 1
|
||
|
||
month_name = datetime.date(target_year, target_month, 1).strftime('%B %Y')
|
||
|
||
context.update({
|
||
'calendar_weeks': calendar_weeks,
|
||
'calendar_detail': calendar_detail,
|
||
'curr_year': target_year,
|
||
'curr_month': target_month,
|
||
'month_name': month_name,
|
||
'prev_year': prev_year,
|
||
'prev_month': prev_month,
|
||
'next_year': next_year,
|
||
'next_month': next_month,
|
||
})
|
||
|
||
return render(request, 'core/work_history.html', context)
|
||
|
||
|
||
# =============================================================================
|
||
# === WORK LOG PAYROLL CROSS-LINK ===
|
||
# From any historic work log, see which workers got paid, which didn't, and
|
||
# (for paid ones) which payslip it was. Admin-only; supervisors never see
|
||
# payroll data. Two endpoints share one helper so the modal and the full
|
||
# page can never drift apart.
|
||
# =============================================================================
|
||
|
||
def _build_work_log_payroll_context(log):
|
||
"""Return a context dict describing the payroll status of a work log.
|
||
|
||
Plain-English summary for future-you:
|
||
For the given work log, loop over each worker on it and decide which of
|
||
three buckets they fall into:
|
||
- "Paid" -> a PayrollRecord links this worker + this log
|
||
- "Priced, not paid" -> worker is in log.priced_workers but no record yet
|
||
- "Unpaid" -> neither
|
||
Also collects any PayrollAdjustments tied to this log (e.g. overtime).
|
||
Used by the AJAX endpoint AND the full detail page — keep them sharing
|
||
this helper so they can never show different data.
|
||
"""
|
||
# Prefetch payroll records once, rather than re-querying per worker.
|
||
payroll_records = list(
|
||
PayrollRecord.objects.filter(work_logs=log).select_related('worker')
|
||
)
|
||
# Lookup: worker_id -> first PayrollRecord found.
|
||
record_by_worker = {r.worker_id: r for r in payroll_records}
|
||
|
||
# IDs of workers who've been priced on this log but aren't necessarily paid yet.
|
||
priced_worker_ids = set(log.priced_workers.values_list('id', flat=True))
|
||
|
||
worker_rows = []
|
||
total_earned = Decimal('0.00')
|
||
total_paid = Decimal('0.00')
|
||
total_outstanding = Decimal('0.00')
|
||
|
||
# Loop each worker on the log and classify them into one of three buckets.
|
||
for worker in log.workers.all():
|
||
record = record_by_worker.get(worker.id)
|
||
if record:
|
||
status = 'Paid'
|
||
earned = worker.daily_rate
|
||
total_paid += earned
|
||
elif worker.id in priced_worker_ids:
|
||
status = 'Priced, not paid'
|
||
earned = worker.daily_rate
|
||
total_outstanding += earned
|
||
else:
|
||
status = 'Unpaid'
|
||
earned = worker.daily_rate
|
||
total_outstanding += earned
|
||
|
||
total_earned += earned
|
||
|
||
worker_rows.append({
|
||
'worker': worker,
|
||
'status': status,
|
||
'earned': earned,
|
||
'payroll_record': record,
|
||
'paid_date': record.date if record else None,
|
||
})
|
||
|
||
# Adjustments tied directly to this log (mostly overtime pricing).
|
||
# Reverse accessor is adjustments_by_work_log (see PayrollAdjustment.work_log related_name).
|
||
adjustments = list(
|
||
log.adjustments_by_work_log
|
||
.select_related('worker', 'payroll_record')
|
||
.order_by('type', 'id')
|
||
)
|
||
|
||
# Pay-period info (only if the team has a schedule configured).
|
||
# Use the log's own date as the reference so we report the period the
|
||
# log falls into — not whichever period happens to contain "today".
|
||
pay_period = get_pay_period(log.team, reference_date=log.date) if log.team else (None, None)
|
||
|
||
# Overtime "needs pricing" flag: log has OT hours but no priced_workers yet.
|
||
# log.overtime_amount is a Decimal with default=0.00 — always present on saved
|
||
# instances, so no defensive getattr needed. Compare via Decimal arithmetic.
|
||
log_overtime = log.overtime_amount or Decimal('0.00')
|
||
overtime_needs_pricing = log_overtime > 0 and not priced_worker_ids
|
||
|
||
return {
|
||
'log': log,
|
||
'worker_rows': worker_rows,
|
||
'adjustments': adjustments,
|
||
'total_earned': total_earned,
|
||
'total_paid': total_paid,
|
||
'total_outstanding': total_outstanding,
|
||
'pay_period': pay_period,
|
||
'overtime_needs_pricing': overtime_needs_pricing,
|
||
}
|
||
|
||
|
||
@login_required
|
||
def work_log_payroll_ajax(request, log_id):
|
||
"""Return JSON describing the payroll status of a work log.
|
||
|
||
Admin-only. The modal's JS builds its DOM from this JSON using
|
||
textContent/createElement (matches the worker_lookup_ajax pattern).
|
||
"""
|
||
# Only admins can see this data (salaries, adjustments, etc.)
|
||
if not is_admin(request.user):
|
||
return JsonResponse({'error': 'Not authorized'}, status=403)
|
||
|
||
# Fetch the log with related objects pre-loaded to avoid extra queries
|
||
log = get_object_or_404(
|
||
WorkLog.objects.select_related('project', 'team', 'supervisor'),
|
||
id=log_id,
|
||
)
|
||
# Shared helper also used by the full-page view (Task 4) — keeps the
|
||
# JSON payload and the HTML view in perfect sync.
|
||
ctx = _build_work_log_payroll_context(log)
|
||
|
||
# === SERIALIZE FOR JSON ===
|
||
# JSON can't represent Decimals or dates natively, so we convert:
|
||
# - Decimal -> float (JS does math in floats anyway)
|
||
# - date -> ISO 8601 string ("2026-04-10")
|
||
def _date_iso(d):
|
||
return d.strftime('%Y-%m-%d') if d else None
|
||
|
||
# One dict per worker row — small, hand-picked fields the modal needs.
|
||
worker_rows = [{
|
||
'worker_id': row['worker'].id,
|
||
'worker_name': row['worker'].name,
|
||
'worker_active': row['worker'].active,
|
||
'status': row['status'],
|
||
'earned': float(row['earned']),
|
||
'payroll_record_id': row['payroll_record'].pk if row['payroll_record'] else None,
|
||
'paid_date': _date_iso(row['paid_date']),
|
||
} for row in ctx['worker_rows']]
|
||
|
||
# Adjustments linked directly to this work_log (Overtime, etc.).
|
||
# We emit BOTH 'type' (raw DB value — stable identifier for any JS logic)
|
||
# AND 'type_label' (short display label from get_type_display()) — visible UI.
|
||
adjustments = [{
|
||
'type': adj.type,
|
||
'type_label': adj.get_type_display(),
|
||
'amount': float(adj.amount),
|
||
'worker_id': adj.worker.id,
|
||
'worker_name': adj.worker.name,
|
||
'payroll_record_id': adj.payroll_record.pk if adj.payroll_record else None,
|
||
} for adj in ctx['adjustments']]
|
||
|
||
return JsonResponse({
|
||
'log_id': log.id,
|
||
'date': _date_iso(log.date),
|
||
'project': {'id': log.project.id, 'name': log.project.name} if log.project else None,
|
||
'team': {'id': log.team.id, 'name': log.team.name} if log.team else None,
|
||
# get_full_name() returns "" if no first/last, so fall back to username.
|
||
'supervisor': (log.supervisor.get_full_name() or log.supervisor.username) if log.supervisor else None,
|
||
'worker_rows': worker_rows,
|
||
'adjustments': adjustments,
|
||
'total_earned': float(ctx['total_earned']),
|
||
'total_paid': float(ctx['total_paid']),
|
||
'total_outstanding': float(ctx['total_outstanding']),
|
||
'pay_period_start': _date_iso(ctx['pay_period'][0]),
|
||
'pay_period_end': _date_iso(ctx['pay_period'][1]),
|
||
'overtime_needs_pricing': ctx['overtime_needs_pricing'],
|
||
# Link to the full-page view (Task 4) for the "Open full page" button.
|
||
'full_page_url': reverse('work_log_payroll_detail', args=[log.id]),
|
||
})
|
||
|
||
|
||
@login_required
|
||
def work_log_payroll_detail(request, log_id):
|
||
"""Full-page payroll-status view for a single work log. Admin-only.
|
||
|
||
Shares the exact same context builder as the AJAX endpoint, so the
|
||
full page and the modal can never drift out of sync.
|
||
"""
|
||
# Admin-only: this page shows salary-level data.
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
# Fetch the log with related objects pre-loaded to avoid extra queries.
|
||
log = get_object_or_404(
|
||
WorkLog.objects.select_related('project', 'team', 'supervisor'),
|
||
id=log_id,
|
||
)
|
||
context = _build_work_log_payroll_context(log)
|
||
return render(request, 'core/work_log_payroll.html', context)
|
||
|
||
|
||
# === CSV EXPORT ===
|
||
# Downloads the filtered work log history as a CSV file.
|
||
# Uses the same filters as the work_history page.
|
||
|
||
@login_required
|
||
def export_work_log_csv(request):
|
||
user = request.user
|
||
|
||
# Build the same queryset as work_history, using the same filters
|
||
if is_admin(user):
|
||
logs = WorkLog.objects.all()
|
||
else:
|
||
logs = WorkLog.objects.filter(
|
||
Q(supervisor=user) | Q(project__supervisors=user)
|
||
).distinct()
|
||
|
||
worker_filter = request.GET.get('worker', '')
|
||
project_filter = request.GET.get('project', '')
|
||
status_filter = request.GET.get('status', '')
|
||
|
||
if worker_filter:
|
||
logs = logs.filter(workers__id=worker_filter).distinct()
|
||
if project_filter:
|
||
logs = logs.filter(project__id=project_filter)
|
||
if status_filter == 'paid':
|
||
logs = logs.filter(payroll_records__isnull=False).distinct()
|
||
elif status_filter == 'unpaid':
|
||
logs = logs.filter(payroll_records__isnull=True)
|
||
|
||
logs = logs.select_related(
|
||
'project', 'supervisor'
|
||
).prefetch_related('workers', 'payroll_records').order_by('-date', '-id')
|
||
|
||
# Create the CSV response
|
||
response = HttpResponse(content_type='text/csv')
|
||
response['Content-Disposition'] = 'attachment; filename="work_log_history.csv"'
|
||
|
||
writer = csv.writer(response)
|
||
writer.writerow(['Date', 'Project', 'Workers', 'Overtime', 'Payment Status', 'Supervisor'])
|
||
|
||
for log in logs:
|
||
worker_names = ', '.join(w.name for w in log.workers.all())
|
||
payment_status = 'Paid' if log.payroll_records.exists() else 'Unpaid'
|
||
overtime_display = log.get_overtime_amount_display() if log.overtime_amount > 0 else 'None'
|
||
supervisor_name = log.supervisor.get_full_name() or log.supervisor.username if log.supervisor else '-'
|
||
|
||
writer.writerow([
|
||
log.date.strftime('%Y-%m-%d'),
|
||
log.project.name,
|
||
worker_names,
|
||
overtime_display,
|
||
payment_status,
|
||
supervisor_name,
|
||
])
|
||
|
||
return response
|
||
|
||
|
||
# === EXPORT WORKERS CSV — FULL WORKER EXPORT ===
|
||
# Downloads every field we have on every worker as a CSV file.
|
||
# Admin-only (supervisors don't have access to salary / ID / banking data).
|
||
#
|
||
# Columns are organised into logical groups so the file reads naturally
|
||
# left-to-right in a spreadsheet:
|
||
# 1. Identity & Pay 2. Banking & Tax
|
||
# 3. Employment & Notes 4. PPE Sizing
|
||
# 5. Driver's License 6. Certifications (one column per type → valid_until date)
|
||
# 7. Warnings summary 8. Activity aggregates (days worked, payslips, total paid)
|
||
#
|
||
# For certs we show one column per cert type with the valid-until date
|
||
# as the value (or "Yes (no expiry)" if the worker has the cert with
|
||
# no expiry set, or empty if they don't hold it). This lets you sort
|
||
# and filter in Excel: "who has a Medical expiring before June?"
|
||
|
||
@login_required
|
||
def export_workers_csv(request):
|
||
"""Export ALL worker data to CSV (profile, banking, PPE, certs, warnings, history)."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
# Pull everything we'll need with prefetching so we don't N+1
|
||
workers = (
|
||
Worker.objects.all()
|
||
.prefetch_related('certificates', 'warnings', 'payroll_records', 'work_logs__project', 'teams')
|
||
.annotate(
|
||
_days_worked=Count('work_logs__date', distinct=True),
|
||
_first_payslip=Min('payroll_records__date'),
|
||
_last_payslip=Max('payroll_records__date'),
|
||
_total_paid=Sum('payroll_records__amount_paid'),
|
||
_payslip_count=Count('payroll_records', distinct=True),
|
||
)
|
||
.order_by('name')
|
||
)
|
||
|
||
# Cert types in the order we want them to appear in the CSV
|
||
cert_types = [
|
||
('skills', 'Skills Cert'),
|
||
('pdp', 'PDP'),
|
||
('first_aid', 'First Aid'),
|
||
('medical', 'Medical'),
|
||
('work_at_height', 'Work at Height'),
|
||
]
|
||
|
||
response = HttpResponse(content_type='text/csv; charset=utf-8')
|
||
response['Content-Disposition'] = 'attachment; filename="workers_full_export.csv"'
|
||
|
||
writer = csv.writer(response)
|
||
writer.writerow([
|
||
# Identity & pay
|
||
'Name', 'ID Number', 'Phone Number', 'Monthly Salary', 'Daily Rate',
|
||
# Banking & tax
|
||
'Tax No', 'UIF', 'Bank', 'Acc No.',
|
||
# Employment & notes
|
||
'Employment Date', 'Active', 'Notes',
|
||
# PPE sizing
|
||
'Shoe Size', 'Overall Top Size', 'Pants Size', 'T-Shirt Size',
|
||
# Driver's License
|
||
'Has Drivers License', 'License Code',
|
||
# Certifications — one column per type, value = valid_until date
|
||
*(f'{label} Valid Until' for _code, label in cert_types),
|
||
# Warnings
|
||
'Total Warnings', 'Last Warning Date', 'Last Warning Severity',
|
||
# Activity aggregates (lifetime)
|
||
'Days Worked', 'Projects Worked On', 'Teams',
|
||
'First Payslip', 'Last Payslip', 'Payslip Count', 'Total Paid Lifetime',
|
||
])
|
||
|
||
for w in workers:
|
||
# --- Build a cert-type → valid_until lookup for this worker ---
|
||
cert_by_type = {c.cert_type: c for c in w.certificates.all()}
|
||
cert_cells = []
|
||
for code, _label in cert_types:
|
||
c = cert_by_type.get(code)
|
||
if not c:
|
||
cert_cells.append('') # doesn't hold it
|
||
elif c.valid_until is None:
|
||
cert_cells.append('Yes (no expiry)') # has it, no expiry
|
||
else:
|
||
cert_cells.append(c.valid_until.strftime('%Y-%m-%d'))
|
||
|
||
# --- Warnings summary ---
|
||
warnings = list(w.warnings.all()) # already ordered -date
|
||
last_warning = warnings[0] if warnings else None
|
||
|
||
# --- Projects & teams worked (distinct names) ---
|
||
project_names = sorted({
|
||
log.project.name for log in w.work_logs.all() if log.project
|
||
})
|
||
team_names = sorted({t.name for t in w.teams.all()})
|
||
|
||
writer.writerow([
|
||
# Identity & pay
|
||
w.name,
|
||
w.id_number,
|
||
w.phone_number,
|
||
f'{w.monthly_salary:.2f}',
|
||
f'{w.daily_rate:.2f}',
|
||
# Banking & tax
|
||
w.tax_number,
|
||
w.uif_number,
|
||
w.bank_name,
|
||
w.bank_account_number,
|
||
# Employment & notes
|
||
w.employment_date.strftime('%Y-%m-%d') if w.employment_date else '',
|
||
'Yes' if w.active else 'No',
|
||
w.notes,
|
||
# PPE sizing
|
||
w.shoe_size,
|
||
w.overall_top_size,
|
||
w.pants_size,
|
||
w.tshirt_size,
|
||
# Driver's License
|
||
'Yes' if w.has_drivers_license else 'No',
|
||
w.drivers_license_code,
|
||
# Certifications (one per cert type)
|
||
*cert_cells,
|
||
# Warnings
|
||
len(warnings),
|
||
last_warning.date.strftime('%Y-%m-%d') if last_warning else '',
|
||
last_warning.get_severity_display() if last_warning else '',
|
||
# Activity aggregates
|
||
w._days_worked or 0,
|
||
'; '.join(project_names),
|
||
'; '.join(team_names),
|
||
w._first_payslip.strftime('%Y-%m-%d') if w._first_payslip else '',
|
||
w._last_payslip.strftime('%Y-%m-%d') if w._last_payslip else '',
|
||
w._payslip_count or 0,
|
||
f'{(w._total_paid or 0):.2f}',
|
||
])
|
||
|
||
return response
|
||
|
||
|
||
# =============================================================
|
||
# === WORKER MANAGEMENT (friendly UI — alternative to /admin/) ===
|
||
# =============================================================
|
||
|
||
@login_required
|
||
def worker_list(request):
|
||
"""Admin-friendly list of all workers with search + status filter.
|
||
|
||
Query params:
|
||
?q=search_term — search name / ID number / phone
|
||
?status=active — default, only active workers
|
||
?status=inactive — only inactive
|
||
?status=all — both
|
||
"""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
q = (request.GET.get('q') or '').strip()
|
||
status = request.GET.get('status') or 'active'
|
||
|
||
workers = Worker.objects.all()
|
||
if status == 'active':
|
||
workers = workers.filter(active=True)
|
||
elif status == 'inactive':
|
||
workers = workers.filter(active=False)
|
||
# 'all' → no filter
|
||
|
||
if q:
|
||
workers = workers.filter(
|
||
Q(name__icontains=q) | Q(id_number__icontains=q) | Q(phone_number__icontains=q)
|
||
)
|
||
|
||
# Annotate days worked (distinct WorkLog dates) — shown in the table
|
||
workers = workers.annotate(
|
||
days_worked=Count('work_logs__date', distinct=True),
|
||
).order_by('name')
|
||
|
||
context = {
|
||
'workers': workers,
|
||
'q': q,
|
||
'status': status,
|
||
'total_count': workers.count(),
|
||
}
|
||
return render(request, 'core/workers/list.html', context)
|
||
|
||
|
||
@login_required
|
||
def worker_detail(request, worker_id):
|
||
"""Read-only worker profile with certs, warnings, and history tabs."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
worker = get_object_or_404(Worker, id=worker_id)
|
||
|
||
# --- History aggregates ---
|
||
projects_worked = (
|
||
Project.objects.filter(work_logs__workers=worker).distinct().order_by('name')
|
||
)
|
||
days_worked = worker.work_logs.values('date').distinct().count()
|
||
payslips = worker.payroll_records.order_by('-date')[:10]
|
||
first_payslip = worker.payroll_records.order_by('date').first()
|
||
last_payslip = worker.payroll_records.order_by('-date').first()
|
||
total_paid = worker.payroll_records.aggregate(t=Sum('amount_paid'))['t'] or Decimal('0.00')
|
||
|
||
# --- Certs grouped by status for the visual badges ---
|
||
certs = worker.certificates.all().order_by('cert_type')
|
||
|
||
# --- Warnings (already ordered -date in Meta) ---
|
||
warnings = worker.warnings.all()
|
||
|
||
# --- Active loans / advances ---
|
||
active_loans = worker.loans.filter(active=True).order_by('-date')
|
||
|
||
context = {
|
||
'worker': worker,
|
||
'projects_worked': projects_worked,
|
||
'days_worked': days_worked,
|
||
'payslips': payslips,
|
||
'first_payslip': first_payslip,
|
||
'last_payslip': last_payslip,
|
||
'total_paid': total_paid,
|
||
'certs': certs,
|
||
'warnings': warnings,
|
||
'active_loans': active_loans,
|
||
}
|
||
return render(request, 'core/workers/detail.html', context)
|
||
|
||
|
||
@login_required
|
||
def worker_edit(request, worker_id=None):
|
||
"""Create or edit a Worker plus their certs and warnings in one page.
|
||
|
||
- GET /workers/new/ → blank form
|
||
- GET /workers/<id>/edit/ → form pre-filled
|
||
- POST to either URL → validate + save + redirect to worker_detail
|
||
"""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
worker = get_object_or_404(Worker, id=worker_id) if worker_id else None
|
||
is_new = worker is None
|
||
|
||
if request.method == 'POST':
|
||
form = WorkerForm(request.POST, request.FILES, instance=worker)
|
||
# Inline formsets need the parent instance bound early so they
|
||
# can scope queryset + handle created rows correctly.
|
||
if form.is_valid():
|
||
saved_worker = form.save()
|
||
cert_fs = WorkerCertificateFormSet(
|
||
request.POST, request.FILES, instance=saved_worker,
|
||
)
|
||
warn_fs = WorkerWarningFormSet(
|
||
request.POST, request.FILES, instance=saved_worker,
|
||
)
|
||
if cert_fs.is_valid() and warn_fs.is_valid():
|
||
cert_fs.save()
|
||
# Warnings save — set issued_by to current admin on new rows
|
||
warnings = warn_fs.save(commit=False)
|
||
for obj in warnings:
|
||
if obj.issued_by_id is None:
|
||
obj.issued_by = request.user
|
||
obj.save()
|
||
for obj in warn_fs.deleted_objects:
|
||
obj.delete()
|
||
action = 'added' if is_new else 'updated'
|
||
messages.success(request, f'Worker "{saved_worker.name}" {action} successfully.')
|
||
return redirect('worker_detail', worker_id=saved_worker.id)
|
||
# Re-bind forms with errors for re-render
|
||
form = WorkerForm(request.POST, request.FILES, instance=saved_worker)
|
||
else:
|
||
cert_fs = WorkerCertificateFormSet(request.POST, request.FILES, instance=worker)
|
||
warn_fs = WorkerWarningFormSet(request.POST, request.FILES, instance=worker)
|
||
else:
|
||
form = WorkerForm(instance=worker)
|
||
cert_fs = WorkerCertificateFormSet(instance=worker)
|
||
warn_fs = WorkerWarningFormSet(instance=worker)
|
||
|
||
context = {
|
||
'form': form,
|
||
'cert_formset': cert_fs,
|
||
'warn_formset': warn_fs,
|
||
'worker': worker,
|
||
'is_new': is_new,
|
||
}
|
||
return render(request, 'core/workers/edit.html', context)
|
||
|
||
|
||
# =============================================================
|
||
# === WORKER BATCH REPORT ===
|
||
# =============================================================
|
||
|
||
def _build_worker_report_context(status=None, project_id=None, team_id=None):
|
||
"""Build the per-worker aggregation list used by HTML / CSV / PDF views.
|
||
|
||
Returns a list of dicts — one per worker — with projects, teams,
|
||
days worked, first/last payslip dates, total paid, cert counts,
|
||
and warning counts. All aggregates are computed in a single query
|
||
via annotate/prefetch to avoid N+1 database hits.
|
||
"""
|
||
workers = Worker.objects.all()
|
||
if status == 'active':
|
||
workers = workers.filter(active=True)
|
||
elif status == 'inactive':
|
||
workers = workers.filter(active=False)
|
||
|
||
if project_id:
|
||
workers = workers.filter(work_logs__project_id=project_id).distinct()
|
||
if team_id:
|
||
workers = workers.filter(teams__id=team_id).distinct()
|
||
|
||
workers = workers.annotate(
|
||
_days_worked=Count('work_logs__date', distinct=True),
|
||
_first_payslip_date=Min('payroll_records__date'),
|
||
_last_payslip_date=Max('payroll_records__date'),
|
||
_total_paid_lifetime=Sum('payroll_records__amount_paid'),
|
||
_payslip_count=Count('payroll_records', distinct=True),
|
||
_active_warnings=Count('warnings', distinct=True),
|
||
).order_by('name')
|
||
|
||
today = datetime.date.today()
|
||
thirty_days_out = today + datetime.timedelta(days=30)
|
||
|
||
rows = []
|
||
for w in workers:
|
||
projects = list(
|
||
Project.objects.filter(work_logs__workers=w).distinct().values_list('name', flat=True)
|
||
)
|
||
teams = list(w.teams.values_list('name', flat=True))
|
||
|
||
certs = w.certificates.all()
|
||
certs_total = certs.count()
|
||
certs_active = 0
|
||
certs_expiring = 0
|
||
certs_expired = 0
|
||
for c in certs:
|
||
if c.valid_until is None:
|
||
certs_active += 1 # non-expiring counts as active
|
||
elif c.valid_until < today:
|
||
certs_expired += 1
|
||
elif c.valid_until <= thirty_days_out:
|
||
certs_expiring += 1
|
||
certs_active += 1
|
||
else:
|
||
certs_active += 1
|
||
|
||
rows.append({
|
||
'worker': w,
|
||
'projects': projects,
|
||
'teams': teams,
|
||
'days_worked': w._days_worked or 0,
|
||
'first_payslip_date': w._first_payslip_date,
|
||
'last_payslip_date': w._last_payslip_date,
|
||
'total_paid_lifetime': w._total_paid_lifetime or Decimal('0.00'),
|
||
'payslip_count': w._payslip_count or 0,
|
||
'certs_total': certs_total,
|
||
'certs_active': certs_active,
|
||
'certs_expiring': certs_expiring,
|
||
'certs_expired': certs_expired,
|
||
'warnings_count': w._active_warnings or 0,
|
||
})
|
||
return rows
|
||
|
||
|
||
@login_required
|
||
def worker_batch_report(request):
|
||
"""HTML table of every worker with aggregated project/team/day/payslip history."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
status = request.GET.get('status') or 'all'
|
||
project_id = request.GET.get('project') or None
|
||
team_id = request.GET.get('team') or None
|
||
|
||
rows = _build_worker_report_context(status=status, project_id=project_id, team_id=team_id)
|
||
|
||
context = {
|
||
'rows': rows,
|
||
'status': status,
|
||
'project_id': project_id,
|
||
'team_id': team_id,
|
||
'projects': Project.objects.all().order_by('name'),
|
||
'teams': Team.objects.all().order_by('name'),
|
||
'query_string': request.GET.urlencode(),
|
||
'total_workers': len(rows),
|
||
}
|
||
return render(request, 'core/workers/batch_report.html', context)
|
||
|
||
|
||
@login_required
|
||
def worker_batch_report_csv(request):
|
||
"""CSV download of the batch worker report."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
status = request.GET.get('status') or 'all'
|
||
project_id = request.GET.get('project') or None
|
||
team_id = request.GET.get('team') or None
|
||
|
||
rows = _build_worker_report_context(status=status, project_id=project_id, team_id=team_id)
|
||
|
||
response = HttpResponse(content_type='text/csv')
|
||
response['Content-Disposition'] = 'attachment; filename="worker_batch_report.csv"'
|
||
writer = csv.writer(response)
|
||
writer.writerow([
|
||
'Name', 'ID Number', 'Monthly Salary', 'Active', 'Days Worked',
|
||
'Projects', 'Teams', 'First Payslip', 'Last Payslip', 'Payslip Count',
|
||
'Total Paid Lifetime', 'Certs (Active/Total)', 'Warnings',
|
||
])
|
||
for r in rows:
|
||
w = r['worker']
|
||
writer.writerow([
|
||
w.name, w.id_number, f'{w.monthly_salary:.2f}',
|
||
'Yes' if w.active else 'No', r['days_worked'],
|
||
'; '.join(r['projects']),
|
||
'; '.join(r['teams']),
|
||
r['first_payslip_date'].strftime('%Y-%m-%d') if r['first_payslip_date'] else '',
|
||
r['last_payslip_date'].strftime('%Y-%m-%d') if r['last_payslip_date'] else '',
|
||
r['payslip_count'],
|
||
f'{r["total_paid_lifetime"]:.2f}',
|
||
f'{r["certs_active"]}/{r["certs_total"]}',
|
||
r['warnings_count'],
|
||
])
|
||
return response
|
||
|
||
|
||
@login_required
|
||
def worker_batch_report_pdf(request):
|
||
"""PDF version of the batch worker report."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
from .utils import render_to_pdf
|
||
|
||
status = request.GET.get('status') or 'all'
|
||
project_id = request.GET.get('project') or None
|
||
team_id = request.GET.get('team') or None
|
||
|
||
rows = _build_worker_report_context(status=status, project_id=project_id, team_id=team_id)
|
||
|
||
context = {
|
||
'rows': rows,
|
||
'status': status,
|
||
'project_name': (
|
||
Project.objects.get(id=project_id).name if project_id else 'All Projects'
|
||
),
|
||
'team_name': Team.objects.get(id=team_id).name if team_id else 'All Teams',
|
||
'now': timezone.now(),
|
||
'total_workers': len(rows),
|
||
}
|
||
pdf = render_to_pdf('core/pdf/workers_report_pdf.html', context)
|
||
if pdf:
|
||
response = HttpResponse(pdf, content_type='application/pdf')
|
||
response['Content-Disposition'] = 'attachment; filename="worker_batch_report.pdf"'
|
||
return response
|
||
messages.error(request, "PDF generation failed.")
|
||
return redirect('worker_batch_report')
|
||
|
||
|
||
# =============================================================
|
||
# === TEAM MANAGEMENT (friendly UI — alternative to /admin/) ===
|
||
# =============================================================
|
||
|
||
@login_required
|
||
def team_list(request):
|
||
"""Admin-friendly list of all teams with search + status filter."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
q = (request.GET.get('q') or '').strip()
|
||
status = request.GET.get('status') or 'active'
|
||
|
||
teams = Team.objects.all().select_related('supervisor')
|
||
if status == 'active':
|
||
teams = teams.filter(active=True)
|
||
elif status == 'inactive':
|
||
teams = teams.filter(active=False)
|
||
|
||
if q:
|
||
teams = teams.filter(name__icontains=q)
|
||
|
||
# Annotate counts for the list table (Django templates can't access
|
||
# attributes starting with underscore, so use a plain name).
|
||
teams = teams.annotate(
|
||
workers_count=Count('workers', distinct=True),
|
||
).order_by('name')
|
||
|
||
context = {
|
||
'teams': teams,
|
||
'q': q,
|
||
'status': status,
|
||
'total_count': teams.count(),
|
||
}
|
||
return render(request, 'core/teams/list.html', context)
|
||
|
||
|
||
@login_required
|
||
def team_detail(request, team_id):
|
||
"""Read-only team profile with pay schedule, workers, and history tabs."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
team = get_object_or_404(Team.objects.select_related('supervisor'), id=team_id)
|
||
|
||
# --- Workers (all, including inactive — flagged via template) ---
|
||
workers = team.workers.all().order_by('-active', 'name')
|
||
|
||
# --- Work history aggregates ---
|
||
work_logs = team.work_logs.select_related('project').prefetch_related('workers').order_by('-date')
|
||
days_worked = work_logs.values('date').distinct().count()
|
||
projects_worked = (
|
||
Project.objects.filter(work_logs__team=team).distinct().order_by('name')
|
||
)
|
||
recent_logs = work_logs[:10]
|
||
|
||
# --- Labour cost for this team (lifetime) using the existing helper ---
|
||
cost_breakdown = _get_labour_costs(work_logs, 'project__name', 'project')
|
||
total_labour_cost = sum((r['total'] for r in cost_breakdown), Decimal('0.00'))
|
||
|
||
# --- Pay schedule preview: current + next 2 periods (3 total) ---
|
||
pay_periods = []
|
||
if team.pay_frequency and team.pay_start_date:
|
||
today = datetime.date.today()
|
||
current = get_pay_period(team, today)
|
||
if current:
|
||
pay_periods.append(current)
|
||
next_ref = current[1] + datetime.timedelta(days=1)
|
||
for _ in range(2):
|
||
p = get_pay_period(team, next_ref)
|
||
if not p:
|
||
break
|
||
pay_periods.append(p)
|
||
next_ref = p[1] + datetime.timedelta(days=1)
|
||
|
||
context = {
|
||
'team': team,
|
||
'workers': workers,
|
||
'days_worked': days_worked,
|
||
'projects_worked': projects_worked,
|
||
'recent_logs': recent_logs,
|
||
'cost_breakdown': cost_breakdown,
|
||
'total_labour_cost': total_labour_cost,
|
||
'pay_periods': pay_periods,
|
||
}
|
||
return render(request, 'core/teams/detail.html', context)
|
||
|
||
|
||
@login_required
|
||
def team_edit(request, team_id=None):
|
||
"""Create or edit a Team."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
team = get_object_or_404(Team, id=team_id) if team_id else None
|
||
is_new = team is None
|
||
|
||
if request.method == 'POST':
|
||
form = TeamForm(request.POST, instance=team)
|
||
if form.is_valid():
|
||
saved = form.save()
|
||
action = 'added' if is_new else 'updated'
|
||
messages.success(request, f'Team "{saved.name}" {action} successfully.')
|
||
return redirect('team_detail', team_id=saved.id)
|
||
else:
|
||
form = TeamForm(instance=team)
|
||
|
||
context = {
|
||
'form': form,
|
||
'team': team,
|
||
'is_new': is_new,
|
||
}
|
||
return render(request, 'core/teams/edit.html', context)
|
||
|
||
|
||
def _build_team_report_context(status=None):
|
||
"""Build the per-team aggregation list used by HTML + CSV views."""
|
||
teams = Team.objects.all().select_related('supervisor')
|
||
if status == 'active':
|
||
teams = teams.filter(active=True)
|
||
elif status == 'inactive':
|
||
teams = teams.filter(active=False)
|
||
|
||
teams = teams.annotate(
|
||
_worker_count=Count('workers', distinct=True),
|
||
_days_worked=Count('work_logs__date', distinct=True),
|
||
).order_by('name')
|
||
|
||
rows = []
|
||
for t in teams:
|
||
projects = list(
|
||
Project.objects.filter(work_logs__team=t).distinct().values_list('name', flat=True)
|
||
)
|
||
cost_breakdown = _get_labour_costs(t.work_logs.all(), 'project__name', 'project')
|
||
total_cost = sum((r['total'] for r in cost_breakdown), Decimal('0.00'))
|
||
rows.append({
|
||
'team': t,
|
||
'worker_count': t._worker_count or 0,
|
||
'days_worked': t._days_worked or 0,
|
||
'projects': projects,
|
||
'total_labour_cost': total_cost,
|
||
})
|
||
return rows
|
||
|
||
|
||
@login_required
|
||
def team_batch_report(request):
|
||
"""HTML table of every team with aggregated stats."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
status = request.GET.get('status') or 'all'
|
||
rows = _build_team_report_context(status=status)
|
||
|
||
context = {
|
||
'rows': rows,
|
||
'status': status,
|
||
'query_string': request.GET.urlencode(),
|
||
'total_teams': len(rows),
|
||
}
|
||
return render(request, 'core/teams/batch_report.html', context)
|
||
|
||
|
||
@login_required
|
||
def team_batch_report_csv(request):
|
||
"""CSV download of the batch team report."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
status = request.GET.get('status') or 'all'
|
||
rows = _build_team_report_context(status=status)
|
||
|
||
response = HttpResponse(content_type='text/csv')
|
||
response['Content-Disposition'] = 'attachment; filename="team_batch_report.csv"'
|
||
writer = csv.writer(response)
|
||
writer.writerow([
|
||
'Team Name', 'Supervisor', 'Active', 'Pay Frequency', 'Pay Start Date',
|
||
'Worker Count', 'Days Worked', 'Projects Worked On', 'Total Labour Cost',
|
||
])
|
||
for r in rows:
|
||
t = r['team']
|
||
writer.writerow([
|
||
t.name,
|
||
t.supervisor.username if t.supervisor else '',
|
||
'Yes' if t.active else 'No',
|
||
t.get_pay_frequency_display() if t.pay_frequency else '',
|
||
t.pay_start_date.strftime('%Y-%m-%d') if t.pay_start_date else '',
|
||
r['worker_count'],
|
||
r['days_worked'],
|
||
'; '.join(r['projects']),
|
||
f'{r["total_labour_cost"]:.2f}',
|
||
])
|
||
return response
|
||
|
||
|
||
# =============================================================
|
||
# === PROJECT MANAGEMENT (friendly UI — alternative to /admin/) ===
|
||
# =============================================================
|
||
|
||
@login_required
|
||
def project_list(request):
|
||
"""Admin-friendly list of all projects with search + status filter."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
q = (request.GET.get('q') or '').strip()
|
||
status = request.GET.get('status') or 'active'
|
||
|
||
projects = Project.objects.all().prefetch_related('supervisors')
|
||
if status == 'active':
|
||
projects = projects.filter(active=True)
|
||
elif status == 'inactive':
|
||
projects = projects.filter(active=False)
|
||
|
||
if q:
|
||
projects = projects.filter(
|
||
Q(name__icontains=q) | Q(description__icontains=q)
|
||
)
|
||
|
||
projects = projects.annotate(
|
||
workers_count=Count('work_logs__workers', distinct=True),
|
||
).order_by('name')
|
||
|
||
context = {
|
||
'projects': projects,
|
||
'q': q,
|
||
'status': status,
|
||
'total_count': projects.count(),
|
||
}
|
||
return render(request, 'core/projects/list.html', context)
|
||
|
||
|
||
@login_required
|
||
def project_detail(request, project_id):
|
||
"""Read-only project profile with supervisors, teams, workers, history tabs."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
project = get_object_or_404(
|
||
Project.objects.prefetch_related('supervisors'),
|
||
id=project_id,
|
||
)
|
||
|
||
# --- Teams that have worked on this project ---
|
||
teams_worked = (
|
||
Team.objects.filter(work_logs__project=project).distinct().order_by('name')
|
||
)
|
||
|
||
# --- Workers who have worked on this project ---
|
||
workers_worked = (
|
||
Worker.objects.filter(work_logs__project=project).distinct().order_by('name')
|
||
)
|
||
|
||
# --- Work logs for history tab ---
|
||
work_logs = project.work_logs.prefetch_related('workers', 'team').order_by('-date')
|
||
days_worked = work_logs.values('date').distinct().count()
|
||
recent_logs = work_logs[:10]
|
||
|
||
# --- Activity date range ---
|
||
date_range = work_logs.aggregate(first=Min('date'), last=Max('date'))
|
||
|
||
# --- Labour cost for this project (lifetime) ---
|
||
cost_breakdown = _get_labour_costs(work_logs, 'team__name', 'team')
|
||
total_labour_cost = sum((r['total'] for r in cost_breakdown), Decimal('0.00'))
|
||
|
||
context = {
|
||
'project': project,
|
||
'teams_worked': teams_worked,
|
||
'workers_worked': workers_worked,
|
||
'days_worked': days_worked,
|
||
'recent_logs': recent_logs,
|
||
'first_activity': date_range.get('first'),
|
||
'last_activity': date_range.get('last'),
|
||
'cost_breakdown': cost_breakdown,
|
||
'total_labour_cost': total_labour_cost,
|
||
}
|
||
return render(request, 'core/projects/detail.html', context)
|
||
|
||
|
||
@login_required
|
||
def project_edit(request, project_id=None):
|
||
"""Create or edit a Project."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
project = get_object_or_404(Project, id=project_id) if project_id else None
|
||
is_new = project is None
|
||
|
||
if request.method == 'POST':
|
||
form = ProjectForm(request.POST, instance=project)
|
||
if form.is_valid():
|
||
saved = form.save()
|
||
action = 'added' if is_new else 'updated'
|
||
messages.success(request, f'Project "{saved.name}" {action} successfully.')
|
||
return redirect('project_detail', project_id=saved.id)
|
||
else:
|
||
form = ProjectForm(instance=project)
|
||
|
||
context = {
|
||
'form': form,
|
||
'project': project,
|
||
'is_new': is_new,
|
||
}
|
||
return render(request, 'core/projects/edit.html', context)
|
||
|
||
|
||
def _build_project_report_context(status=None):
|
||
"""Build per-project aggregation list used by HTML + CSV views."""
|
||
projects = Project.objects.all().prefetch_related('supervisors')
|
||
if status == 'active':
|
||
projects = projects.filter(active=True)
|
||
elif status == 'inactive':
|
||
projects = projects.filter(active=False)
|
||
|
||
projects = projects.annotate(
|
||
_worker_days=Count('work_logs__workers', distinct=False),
|
||
_distinct_workers=Count('work_logs__workers', distinct=True),
|
||
_first_date=Min('work_logs__date'),
|
||
_last_date=Max('work_logs__date'),
|
||
).order_by('name')
|
||
|
||
rows = []
|
||
for p in projects:
|
||
teams = list(
|
||
Team.objects.filter(work_logs__project=p).distinct().values_list('name', flat=True)
|
||
)
|
||
supervisors = list(p.supervisors.values_list('username', flat=True))
|
||
cost_breakdown = _get_labour_costs(p.work_logs.all(), 'team__name', 'team')
|
||
total_cost = sum((r['total'] for r in cost_breakdown), Decimal('0.00'))
|
||
rows.append({
|
||
'project': p,
|
||
'supervisors': supervisors,
|
||
'teams': teams,
|
||
'worker_days': p._worker_days or 0,
|
||
'distinct_workers': p._distinct_workers or 0,
|
||
'first_date': p._first_date,
|
||
'last_date': p._last_date,
|
||
'total_labour_cost': total_cost,
|
||
})
|
||
return rows
|
||
|
||
|
||
@login_required
|
||
def project_batch_report(request):
|
||
"""HTML table of every project with aggregated stats."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
status = request.GET.get('status') or 'all'
|
||
rows = _build_project_report_context(status=status)
|
||
|
||
context = {
|
||
'rows': rows,
|
||
'status': status,
|
||
'query_string': request.GET.urlencode(),
|
||
'total_projects': len(rows),
|
||
}
|
||
return render(request, 'core/projects/batch_report.html', context)
|
||
|
||
|
||
@login_required
|
||
def project_batch_report_csv(request):
|
||
"""CSV download of the batch project report."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
status = request.GET.get('status') or 'all'
|
||
rows = _build_project_report_context(status=status)
|
||
|
||
response = HttpResponse(content_type='text/csv')
|
||
response['Content-Disposition'] = 'attachment; filename="project_batch_report.csv"'
|
||
writer = csv.writer(response)
|
||
writer.writerow([
|
||
'Project Name', 'Active', 'Start Date', 'End Date',
|
||
'Supervisors', 'Teams Worked', 'Distinct Workers', 'Worker-Days',
|
||
'First Activity', 'Last Activity', 'Total Labour Cost',
|
||
])
|
||
for r in rows:
|
||
p = r['project']
|
||
writer.writerow([
|
||
p.name,
|
||
'Yes' if p.active else 'No',
|
||
p.start_date.strftime('%Y-%m-%d') if p.start_date else '',
|
||
p.end_date.strftime('%Y-%m-%d') if p.end_date else '',
|
||
'; '.join(r['supervisors']),
|
||
'; '.join(r['teams']),
|
||
r['distinct_workers'],
|
||
r['worker_days'],
|
||
r['first_date'].strftime('%Y-%m-%d') if r['first_date'] else '',
|
||
r['last_date'].strftime('%Y-%m-%d') if r['last_date'] else '',
|
||
f'{r["total_labour_cost"]:.2f}',
|
||
])
|
||
return response
|
||
|
||
|
||
# === REPORT GENERATION ===
|
||
# Builds a comprehensive payroll report for a given date range.
|
||
# Used by both the on-screen HTML report and the PDF download.
|
||
#
|
||
# TERMINOLOGY (used consistently throughout report):
|
||
# - "Worker-Days" = total individual worker×day entries (if 5 workers work 22 days = 110 worker-days)
|
||
# - "Days Worked" (per worker) = distinct dates that specific worker was logged
|
||
# - "Total Paid" = actual money transferred to a worker (net of all adjustments)
|
||
# - "Loans Outstanding" = current remaining balance on active loans
|
||
# - "Advances Outstanding" = current remaining balance on active advances
|
||
|
||
# === REPORT LABEL MAP ===
|
||
# Maps internal PayrollAdjustment type names to human-readable report labels.
|
||
# These are used in both the Adjustment Summary and Worker Breakdown tables.
|
||
REPORT_ADJ_LABELS = {
|
||
'Bonus': 'Bonuses',
|
||
'Overtime': 'Overtime',
|
||
'Deduction': 'Deductions',
|
||
'Loan Repayment': 'Loan Repaid',
|
||
'Advance Repayment': 'Advance Repaid',
|
||
'New Loan': 'Loans Issued',
|
||
'Advance Payment': 'Advances Issued',
|
||
}
|
||
|
||
|
||
def _get_labour_costs(work_logs_qs, group_by_field, name_key):
|
||
"""
|
||
Calculate labour cost (sum of daily rates) grouped by a field.
|
||
Used for project and team cost breakdowns.
|
||
|
||
Args:
|
||
work_logs_qs: Filtered WorkLog queryset
|
||
group_by_field: Field to group by (e.g. 'project__name', 'team__name')
|
||
name_key: Key name for the result dict (e.g. 'project', 'team')
|
||
|
||
Returns list of dicts: [{name_key: ..., 'worker_days': ..., 'total': ...}]
|
||
"""
|
||
data = list(
|
||
work_logs_qs
|
||
.values(group_by_field)
|
||
.annotate(
|
||
worker_days=Count('workers'),
|
||
labour_cost=Sum(F('workers__monthly_salary') / Decimal('20'))
|
||
)
|
||
.filter(worker_days__gt=0)
|
||
.order_by('-labour_cost')
|
||
)
|
||
return [
|
||
{
|
||
name_key: item[group_by_field] or 'Unknown',
|
||
'worker_days': item['worker_days'],
|
||
'total': item['labour_cost'] or Decimal('0.00'),
|
||
}
|
||
for item in data
|
||
]
|
||
|
||
|
||
def _build_report_context(start_date, end_date, project_ids=None, team_ids=None):
|
||
"""
|
||
Compute all report data for the given date range and filters.
|
||
Returns a dictionary of totals, breakdowns, and worker-level data.
|
||
|
||
project_ids / team_ids are lists of ints (from request.GET.getlist).
|
||
None or [] are treated as "no filter" — returning data for every project
|
||
or every team respectively. A single-element list like [3] reproduces
|
||
the old single-id behaviour (so old URLs like ?project=3 still work).
|
||
|
||
Key design decision: "Worker-Days" counts total worker×log entries
|
||
(not distinct calendar dates). This correlates correctly with cost —
|
||
if 5 workers work 22 days, that's 110 worker-days, and
|
||
cost / worker-days ≈ average daily rate.
|
||
"""
|
||
# --- Base filters ---
|
||
date_filter = Q(date__gte=start_date, date__lte=end_date)
|
||
|
||
# --- PayrollRecords in range ---
|
||
#
|
||
# IMPORTANT — avoid M2M double-JOIN inflation:
|
||
# Chaining `.filter(work_logs__project_id__in=X).distinct().filter(work_logs__team_id__in=Y)`
|
||
# creates TWO separate JOIN aliases on core_payrollrecord_work_logs. Any
|
||
# later `.values().annotate(Sum())` then aggregates across the cartesian
|
||
# product of matching rows, inflating per-worker and per-date totals by
|
||
# N × M (where N and M are the counts of matching logs per record).
|
||
# `.aggregate(Sum())` is safe because Django wraps it in a subquery when
|
||
# distinct() is in play, but `.values().annotate(Sum())` isn't — so we
|
||
# use id__in subqueries to keep the outer queryset JOIN-free.
|
||
# See ReportContextFilterInflationTests for regression coverage.
|
||
records = PayrollRecord.objects.filter(date_filter)
|
||
if project_ids:
|
||
records = records.filter(
|
||
id__in=PayrollRecord.objects.filter(
|
||
work_logs__project_id__in=project_ids
|
||
).values('id')
|
||
)
|
||
if team_ids:
|
||
records = records.filter(
|
||
id__in=PayrollRecord.objects.filter(
|
||
work_logs__team_id__in=team_ids
|
||
).values('id')
|
||
)
|
||
|
||
# --- Total Paid Out (sum of all payments made) ---
|
||
total_paid_out = records.aggregate(total=Sum('amount_paid'))['total'] or Decimal('0.00')
|
||
|
||
# --- Payments by Date (total paid per day) ---
|
||
payments_by_date = (
|
||
records.values('date')
|
||
.annotate(total=Sum('amount_paid'))
|
||
.order_by('date')
|
||
)
|
||
|
||
# --- Adjustments in range ---
|
||
# project_ids filters via an FK column (no JOIN inflation risk), but
|
||
# team_ids goes through worker__teams M2M — apply the same subquery
|
||
# pattern as above to keep adj_by_type's values().annotate(Sum()) safe.
|
||
adjustments = PayrollAdjustment.objects.filter(date_filter)
|
||
if project_ids:
|
||
adjustments = adjustments.filter(project_id__in=project_ids)
|
||
if team_ids:
|
||
adjustments = adjustments.filter(
|
||
worker__in=Worker.objects.filter(teams__id__in=team_ids).values('id')
|
||
)
|
||
|
||
# --- Work Logs in range (for calculating actual labour cost) ---
|
||
work_logs_qs = WorkLog.objects.filter(date__gte=start_date, date__lte=end_date)
|
||
if project_ids:
|
||
work_logs_qs = work_logs_qs.filter(project_id__in=project_ids)
|
||
if team_ids:
|
||
work_logs_qs = work_logs_qs.filter(team_id__in=team_ids)
|
||
|
||
# Total worker-days across all work logs (counts M2M worker entries)
|
||
total_worker_days = work_logs_qs.aggregate(
|
||
total=Count('workers'))['total'] or 0
|
||
|
||
# --- Labour Cost by Project (selected period) ---
|
||
# Uses daily rates (monthly_salary / 20) for the TRUE cost per project
|
||
cost_per_project = _get_labour_costs(work_logs_qs, 'project__name', 'project')
|
||
|
||
# --- Labour Cost by Team (selected period) ---
|
||
cost_per_team = _get_labour_costs(
|
||
work_logs_qs.filter(team__isnull=False), 'team__name', 'team'
|
||
)
|
||
|
||
# --- ALL TIME: project and team costs since the very first work log ---
|
||
all_time_logs = WorkLog.objects.all()
|
||
if project_ids:
|
||
all_time_logs = all_time_logs.filter(project_id__in=project_ids)
|
||
if team_ids:
|
||
all_time_logs = all_time_logs.filter(team_id__in=team_ids)
|
||
# === CHAPTER I — All Time Projects (enriched) ===
|
||
# Adds working_days and avg_per_working_day (the 2026-04-23 design).
|
||
# Can't just extend _get_labour_costs because that helper is used by
|
||
# other sections with different columns. Wrap it here instead.
|
||
alltime_projects_raw = _get_labour_costs(all_time_logs, 'project__name', 'project')
|
||
# Build a lookup of working_days per project (distinct work-log dates)
|
||
project_working_days = dict(
|
||
all_time_logs.filter(project__isnull=False)
|
||
.values('project_id', 'project__name')
|
||
.annotate(days=Count('date', distinct=True))
|
||
.values_list('project__name', 'days')
|
||
)
|
||
# Lookup project start_date from the Project model (authoritative source)
|
||
start_dates = dict(
|
||
Project.objects.values_list('name', 'start_date')
|
||
)
|
||
# Lookup the most recent WorkLog.date for each project (for the new
|
||
# "Last Activity" column — helps Konrad spot which projects are dormant
|
||
# without having to scroll through date pickers).
|
||
last_activity = dict(
|
||
all_time_logs.filter(project__isnull=False)
|
||
.values('project__name')
|
||
.annotate(last=Max('date'))
|
||
.values_list('project__name', 'last')
|
||
)
|
||
alltime_projects = []
|
||
for row in alltime_projects_raw:
|
||
name = row['project']
|
||
wdays = project_working_days.get(name, 0)
|
||
total = row['total'] or Decimal('0.00')
|
||
avg = (total / wdays).quantize(Decimal('0.01')) if wdays else Decimal('0.00')
|
||
alltime_projects.append({
|
||
'project': name,
|
||
'worker_days': row['worker_days'],
|
||
'total': total,
|
||
'start_date': start_dates.get(name), # may be None
|
||
'last_activity': last_activity.get(name), # may be None
|
||
'working_days': wdays,
|
||
'avg_per_working_day': avg,
|
||
})
|
||
alltime_teams = _get_labour_costs(
|
||
all_time_logs.filter(team__isnull=False), 'team__name', 'team'
|
||
)
|
||
|
||
# --- THIS YEAR: project and team costs for the current calendar year ---
|
||
current_year = timezone.now().year
|
||
year_start = datetime.date(current_year, 1, 1)
|
||
year_end = datetime.date(current_year, 12, 31)
|
||
year_logs = WorkLog.objects.filter(date__gte=year_start, date__lte=year_end)
|
||
if project_ids:
|
||
year_logs = year_logs.filter(project_id__in=project_ids)
|
||
if team_ids:
|
||
year_logs = year_logs.filter(team_id__in=team_ids)
|
||
year_projects = _get_labour_costs(year_logs, 'project__name', 'project')
|
||
year_teams = _get_labour_costs(
|
||
year_logs.filter(team__isnull=False), 'team__name', 'team'
|
||
)
|
||
|
||
# --- Loans & Advances Outstanding (current balances) ---
|
||
# team filter goes through worker__teams (M2M). Use the subquery pattern
|
||
# (CLAUDE.md Django ORM gotcha) so we don't pick up JOIN inflation on the
|
||
# aggregate.
|
||
active_loans = Loan.objects.filter(active=True, date__lte=end_date)
|
||
if team_ids:
|
||
active_loans = active_loans.filter(
|
||
worker__in=Worker.objects.filter(teams__id__in=team_ids).values('id')
|
||
)
|
||
loans_outstanding = active_loans.filter(loan_type='loan').aggregate(
|
||
total=Sum('remaining_balance'))['total'] or Decimal('0.00')
|
||
advances_outstanding = active_loans.filter(loan_type='advance').aggregate(
|
||
total=Sum('remaining_balance'))['total'] or Decimal('0.00')
|
||
|
||
# --- Loans & Advances Issued This Period ---
|
||
loans_issued_qs = Loan.objects.filter(date_filter, loan_type='loan')
|
||
advances_issued_qs = Loan.objects.filter(date_filter, loan_type='advance')
|
||
if team_ids:
|
||
team_worker_ids = Worker.objects.filter(teams__id__in=team_ids).values('id')
|
||
loans_issued_qs = loans_issued_qs.filter(worker__in=team_worker_ids)
|
||
advances_issued_qs = advances_issued_qs.filter(worker__in=team_worker_ids)
|
||
loans_issued = loans_issued_qs.aggregate(
|
||
total=Sum('principal_amount'))['total'] or Decimal('0.00')
|
||
advances_issued = advances_issued_qs.aggregate(
|
||
total=Sum('principal_amount'))['total'] or Decimal('0.00')
|
||
|
||
# --- Adjustment Summary ---
|
||
# Group by type, use readable labels, and sort by logical grouping
|
||
adj_by_type = (
|
||
adjustments.values('type')
|
||
.annotate(total=Sum('amount'))
|
||
.order_by('type')
|
||
)
|
||
adjustment_totals = [
|
||
{
|
||
'type': item['type'],
|
||
'label': REPORT_ADJ_LABELS.get(item['type'], item['type']),
|
||
'total': item['total'],
|
||
}
|
||
for item in adj_by_type
|
||
]
|
||
|
||
# --- Determine which adjustment types appear (for worker table columns) ---
|
||
# Only types with non-zero totals get a column — keeps the table readable
|
||
active_adj_types = list(
|
||
adjustments.values_list('type', flat=True).distinct().order_by('type')
|
||
)
|
||
# Create matching readable labels for column headers
|
||
active_adj_labels = [REPORT_ADJ_LABELS.get(t, t) for t in active_adj_types]
|
||
|
||
# --- Worker Breakdown ---
|
||
# Per worker: days worked, total paid, and each adjustment type
|
||
worker_records = (
|
||
records.values('worker__id', 'worker__name')
|
||
.annotate(total_paid=Sum('amount_paid'))
|
||
.order_by('worker__name')
|
||
)
|
||
|
||
# Days worked per worker = distinct dates they appear in work logs
|
||
days_per_worker = dict(
|
||
work_logs_qs.values('workers__id')
|
||
.annotate(days=Count('date', distinct=True))
|
||
.values_list('workers__id', 'days')
|
||
)
|
||
|
||
worker_breakdown = []
|
||
for wr in worker_records:
|
||
w_adjs = adjustments.filter(worker_id=wr['worker__id'])
|
||
# Per-type amounts for this worker (only for types that exist in the period)
|
||
adj_values = []
|
||
for adj_type in active_adj_types:
|
||
amt = w_adjs.filter(type=adj_type).aggregate(
|
||
t=Sum('amount'))['t'] or Decimal('0.00')
|
||
adj_values.append(amt)
|
||
|
||
worker_breakdown.append({
|
||
'name': wr['worker__name'],
|
||
'total_paid': wr['total_paid'],
|
||
'days': days_per_worker.get(wr['worker__id'], 0),
|
||
'adj_values': adj_values,
|
||
})
|
||
|
||
# === Hero KPI band data (executive report v2) ===
|
||
# Small helpers that power the new hero band at the top of the report.
|
||
# Kept separate so the big return dict stays easy to scan.
|
||
_cv = _company_cost_velocity()
|
||
|
||
return {
|
||
'start_date': start_date,
|
||
'end_date': end_date,
|
||
'project_name': (
|
||
', '.join(
|
||
Project.objects.filter(id__in=project_ids).values_list('name', flat=True)
|
||
)
|
||
if project_ids else 'All Projects'
|
||
),
|
||
'team_name': (
|
||
', '.join(
|
||
Team.objects.filter(id__in=team_ids).values_list('name', flat=True)
|
||
)
|
||
if team_ids else 'All Teams'
|
||
),
|
||
# --- Summary ---
|
||
'total_paid_out': total_paid_out,
|
||
'total_worker_days': total_worker_days,
|
||
'loans_outstanding': loans_outstanding,
|
||
'advances_outstanding': advances_outstanding,
|
||
'loans_issued': loans_issued,
|
||
'advances_issued': advances_issued,
|
||
# --- All Time & Year context ---
|
||
'alltime_projects': alltime_projects,
|
||
'alltime_teams': alltime_teams,
|
||
'current_year': current_year,
|
||
'year_projects': year_projects,
|
||
'year_teams': year_teams,
|
||
# --- Selected Period tables ---
|
||
'payments_by_date': payments_by_date,
|
||
'cost_per_project': cost_per_project,
|
||
'cost_per_team': cost_per_team,
|
||
'adjustment_totals': adjustment_totals,
|
||
'active_adj_types': active_adj_types,
|
||
'active_adj_labels': active_adj_labels,
|
||
'worker_breakdown': worker_breakdown,
|
||
# --- Hero KPI band (executive report v2) ---
|
||
'current_outstanding': _current_outstanding_in_scope(
|
||
project_ids=project_ids, team_ids=team_ids
|
||
),
|
||
'current_as_of': timezone.now(),
|
||
'company_avg_daily': _cv['avg_daily'],
|
||
'company_avg_monthly': _cv['avg_monthly'],
|
||
'company_working_days': _cv['working_days'],
|
||
'team_project_activity': _team_project_activity(work_logs_qs),
|
||
}
|
||
|
||
|
||
def _parse_report_dates(request):
|
||
"""
|
||
Parse report date range from GET params.
|
||
Supports two modes:
|
||
- "from_month" + "to_month" params (e.g. "2026-01" to "2026-03") → Jan 1 to Mar 31
|
||
- "start_date" + "end_date" params → custom range
|
||
Also supports legacy "month" param for backward compatibility.
|
||
Returns (start_date, end_date) as date objects, or (None, None) if invalid.
|
||
"""
|
||
from_month_str = request.GET.get('from_month', '').strip()
|
||
to_month_str = request.GET.get('to_month', '').strip()
|
||
start_str = request.GET.get('start_date', '').strip()
|
||
end_str = request.GET.get('end_date', '').strip()
|
||
# Legacy single month param
|
||
month_str = request.GET.get('month', '').strip()
|
||
|
||
if from_month_str:
|
||
# Multi-month mode: from_month → first day, to_month → last day
|
||
try:
|
||
fy, fm = map(int, from_month_str.split('-'))
|
||
start_date = datetime.date(fy, fm, 1)
|
||
# If to_month is missing, use same as from_month (single month)
|
||
if to_month_str:
|
||
ty, tm = map(int, to_month_str.split('-'))
|
||
else:
|
||
ty, tm = fy, fm
|
||
last_day = cal_module.monthrange(ty, tm)[1]
|
||
end_date = datetime.date(ty, tm, last_day)
|
||
return start_date, end_date
|
||
except (ValueError, TypeError):
|
||
return None, None
|
||
elif month_str:
|
||
# Legacy single month mode
|
||
try:
|
||
year, month = map(int, month_str.split('-'))
|
||
start_date = datetime.date(year, month, 1)
|
||
last_day = cal_module.monthrange(year, month)[1]
|
||
end_date = datetime.date(year, month, last_day)
|
||
return start_date, end_date
|
||
except (ValueError, TypeError):
|
||
return None, None
|
||
elif start_str and end_str:
|
||
# Custom range mode
|
||
try:
|
||
return datetime.date.fromisoformat(start_str), datetime.date.fromisoformat(end_str)
|
||
except ValueError:
|
||
return None, None
|
||
|
||
return None, None
|
||
|
||
|
||
@login_required
|
||
def generate_report(request):
|
||
"""Render on-screen payroll report with filters from GET params."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
# Parse dates — supports both "month" and "start_date/end_date" params
|
||
start_date, end_date = _parse_report_dates(request)
|
||
# Multi-value: ?project=1&project=2 comes in as getlist ['1','2'].
|
||
# Cast to ints; drop empties. None if list is empty (= "no filter").
|
||
def _ids(name):
|
||
return [int(v) for v in request.GET.getlist(name) if v.strip().isdigit()]
|
||
project_ids = _ids('project') or None
|
||
team_ids = _ids('team') or None
|
||
|
||
if not start_date or not end_date:
|
||
messages.error(request, "Please select a month or provide start and end dates.")
|
||
return redirect('home')
|
||
|
||
# Build report data using shared helper
|
||
context = _build_report_context(
|
||
start_date, end_date,
|
||
project_ids=project_ids, team_ids=team_ids,
|
||
)
|
||
# Pass the raw query params so the "Download PDF" button can use them
|
||
context['query_string'] = request.GET.urlencode()
|
||
# === FILTER PILL CLEAR LINKS ===
|
||
# For the filter-pill x buttons: rebuild the querystring with one filter removed.
|
||
# QueryDict.pop() only removes the first occurrence, so for multi-value keys
|
||
# (e.g. project=1&project=2) we follow up with setlist(key, []) to strip them all.
|
||
def _qs_without(key):
|
||
qd = request.GET.copy()
|
||
qd.pop(key, None)
|
||
qd.setlist(key, [])
|
||
return qd.urlencode()
|
||
context['query_string_without_project'] = _qs_without('project')
|
||
context['query_string_without_team'] = _qs_without('team')
|
||
# === Date-scoped pickers + cross-filter ===
|
||
# Admin UX decision (Konrad, 2026-04-23 Checkpoint 1 feedback):
|
||
# The project/team pills should only show entries that actually have
|
||
# WorkLog activity within the currently-selected date range. Same for
|
||
# the (project_id, team_id) pair map that powers the cross-filter.
|
||
# Rationale: "show me teams I'm actually looking at right now," not
|
||
# "every team that ever existed."
|
||
#
|
||
# Guarantee: a project or team that's currently in the URL selection
|
||
# MUST remain in the list — even if it has no logs in this window —
|
||
# so the user can always see and deselect their own picks.
|
||
logs_in_range = WorkLog.objects.filter(
|
||
date__gte=start_date, date__lte=end_date,
|
||
)
|
||
project_ids_in_range = set(
|
||
logs_in_range.values_list('project_id', flat=True).distinct()
|
||
)
|
||
team_ids_in_range = set(
|
||
logs_in_range.values_list('team_id', flat=True).distinct()
|
||
)
|
||
# Logs without a project/team contribute a None — drop it
|
||
project_ids_in_range.discard(None)
|
||
team_ids_in_range.discard(None)
|
||
|
||
# Union with the user's URL selections so picks never vanish
|
||
selected_p_int = {int(x) for x in (project_ids or [])}
|
||
selected_t_int = {int(x) for x in (team_ids or [])}
|
||
project_ids_to_show = project_ids_in_range | selected_p_int
|
||
team_ids_to_show = team_ids_in_range | selected_t_int
|
||
|
||
# Cross-filter pair map, scoped to the same date range
|
||
# (raw Python list — |json_script in the template handles serialisation)
|
||
pairs = list(
|
||
logs_in_range
|
||
.filter(project__isnull=False, team__isnull=False)
|
||
.values('project_id', 'team_id')
|
||
.distinct()
|
||
)
|
||
context['project_team_pairs_json'] = pairs
|
||
|
||
# Picker lists (only projects/teams with activity in this window,
|
||
# union'd with current URL selection)
|
||
context['projects'] = (
|
||
Project.objects.filter(id__in=project_ids_to_show).order_by('name')
|
||
)
|
||
context['teams'] = (
|
||
Team.objects.filter(id__in=team_ids_to_show).order_by('name')
|
||
)
|
||
# Template's `{% if p.id|stringformat:"s" in selected_project_ids %}`
|
||
# comparison needs strings on both sides.
|
||
context['selected_project_ids'] = [str(p) for p in (project_ids or [])]
|
||
context['selected_team_ids'] = [str(t) for t in (team_ids or [])]
|
||
|
||
return render(request, 'core/report.html', context)
|
||
|
||
|
||
@login_required
|
||
def generate_report_pdf(request):
|
||
"""Generate a PDF version of the payroll report (same data as HTML view)."""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
from .utils import render_to_pdf
|
||
|
||
# Parse dates — same logic as the HTML view
|
||
start_date, end_date = _parse_report_dates(request)
|
||
# Multi-value: ?project=1&project=2 comes in as getlist ['1','2'].
|
||
# Cast to ints; drop empties. None if list is empty (= "no filter").
|
||
def _ids(name):
|
||
return [int(v) for v in request.GET.getlist(name) if v.strip().isdigit()]
|
||
project_ids = _ids('project') or None
|
||
team_ids = _ids('team') or None
|
||
|
||
if not start_date or not end_date:
|
||
messages.error(request, "Please select a month or provide start and end dates.")
|
||
return redirect('home')
|
||
|
||
context = _build_report_context(
|
||
start_date, end_date,
|
||
project_ids=project_ids, team_ids=team_ids,
|
||
)
|
||
context['now'] = timezone.now()
|
||
|
||
pdf = render_to_pdf('core/pdf/report_pdf.html', context)
|
||
if pdf:
|
||
response = HttpResponse(pdf, content_type='application/pdf')
|
||
filename = f"payroll_report_{start_date}_{end_date}.pdf"
|
||
response['Content-Disposition'] = f'attachment; filename="{filename}"'
|
||
return response
|
||
|
||
messages.error(request, "PDF generation failed. xhtml2pdf may not be installed.")
|
||
return redirect('home')
|
||
|
||
|
||
# === TOGGLE RESOURCE STATUS (AJAX) ===
|
||
# Called by the toggle switches on the dashboard to activate/deactivate
|
||
# workers, projects, or teams without reloading the page.
|
||
|
||
@login_required
|
||
def toggle_active(request, model_name, item_id):
|
||
if request.method != 'POST':
|
||
return HttpResponseForbidden("Only POST requests are allowed.")
|
||
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Not authorized.")
|
||
|
||
# Map URL parameter to the correct model class
|
||
model_map = {
|
||
'worker': Worker,
|
||
'project': Project,
|
||
'team': Team
|
||
}
|
||
|
||
if model_name not in model_map:
|
||
return JsonResponse({'error': 'Invalid model'}, status=400)
|
||
|
||
model = model_map[model_name]
|
||
try:
|
||
item = model.objects.get(id=item_id)
|
||
item.active = not item.active
|
||
item.save()
|
||
return JsonResponse({
|
||
'status': 'success',
|
||
'active': item.active,
|
||
'message': f'{item.name} is now {"active" if item.active else "inactive"}.'
|
||
})
|
||
except model.DoesNotExist:
|
||
return JsonResponse({'error': 'Item not found'}, status=404)
|
||
|
||
|
||
# =============================================================================
|
||
# === ADJUSTMENT GROUPING HELPER ===
|
||
# Used by the Adjustments tab's By Type / By Worker render path.
|
||
# Plain-English: takes a flat list of PayrollAdjustment rows and regroups
|
||
# them into buckets keyed by adjustment type or by worker. The result is
|
||
# a list of group-dicts the template can iterate, each carrying a label,
|
||
# CSS-friendly slug, the list of rows in the bucket, a count, and the
|
||
# net signed sum of amounts (additives count +, deductives count -).
|
||
# =============================================================================
|
||
|
||
def _group_adjustments(adjustments, group_by):
|
||
"""Regroup a flat list/queryset of PayrollAdjustment into buckets.
|
||
|
||
`group_by` is 'type' or 'worker'. Returns a list of dicts:
|
||
{'label', 'slug', 'rows', 'count', 'net_sum'}
|
||
|
||
Ordered by descending magnitude of net_sum so the highest-impact
|
||
bucket sits at the top of the view (big groups first).
|
||
"""
|
||
from collections import defaultdict
|
||
buckets = defaultdict(list)
|
||
for adj in adjustments:
|
||
key = adj.type if group_by == 'type' else adj.worker_id
|
||
buckets[key].append(adj)
|
||
|
||
groups = []
|
||
for key, rows in buckets.items():
|
||
if group_by == 'type':
|
||
# Visible header text uses the short display label (e.g. "Loan",
|
||
# "Advance", "Advance Repaid") from the model's TYPE_CHOICES.
|
||
label = rows[0].get_type_display()
|
||
# type_key holds the raw DB value so the template can emit it as
|
||
# data-type="..." for the [data-type="X"] CSS border-left accent
|
||
# selectors that still key on the canonical DB value.
|
||
type_key = key
|
||
slug = key.lower().replace(' ', '-')
|
||
else: # worker
|
||
label = rows[0].worker.name
|
||
type_key = None
|
||
slug = f'worker-{key}'
|
||
net_sum = sum(
|
||
(r.amount if r.type in ADDITIVE_TYPES else -r.amount)
|
||
for r in rows
|
||
)
|
||
groups.append({
|
||
'label': label,
|
||
'type_key': type_key,
|
||
'slug': slug,
|
||
'rows': rows,
|
||
'count': len(rows),
|
||
'net_sum': net_sum,
|
||
})
|
||
groups.sort(key=lambda g: -abs(g['net_sum']))
|
||
return groups
|
||
|
||
|
||
# =============================================================================
|
||
# === PAYROLL DASHBOARD ===
|
||
# The main payroll page. Shows per-worker breakdown of what's owed,
|
||
# adjustment management, payment processing, and Chart.js analytics.
|
||
# Admin-only — supervisors cannot access this page.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def payroll_dashboard(request):
|
||
if not is_admin(request.user):
|
||
messages.error(request, 'Only admins can access the payroll dashboard.')
|
||
return redirect('home')
|
||
|
||
status_filter = request.GET.get('status', 'pending')
|
||
|
||
# --- Per-worker pending payment data ---
|
||
# For each active worker, calculate: unpaid days × daily_rate + net adjustments
|
||
active_workers = Worker.objects.filter(active=True).prefetch_related(
|
||
Prefetch('work_logs', queryset=WorkLog.objects.prefetch_related(
|
||
'payroll_records', 'priced_workers'
|
||
).select_related('project')),
|
||
Prefetch('adjustments', queryset=PayrollAdjustment.objects.filter(
|
||
payroll_record__isnull=True
|
||
).select_related('project', 'loan', 'work_log'),
|
||
to_attr='pending_adjustments_list'),
|
||
).order_by('name')
|
||
|
||
workers_data = []
|
||
outstanding_total = Decimal('0.00')
|
||
# === OUTSTANDING BREAKDOWN (same as home dashboard) ===
|
||
unpaid_wages_total = Decimal('0.00') # Pure daily rates for unpaid workers
|
||
pending_adj_add_total = Decimal('0.00') # Unpaid additive adjustments
|
||
pending_adj_sub_total = Decimal('0.00') # Unpaid deductive adjustments
|
||
all_ot_data = [] # For the Price Overtime modal
|
||
|
||
# === PRE-COMPUTED LOOKUPS — avoid per-worker SELECTs in the loop below ===
|
||
# Previously the loop fired:
|
||
# - one `Loan.objects.filter(worker=w, active=True).exists()` per worker
|
||
# - one `worker.teams.filter(active=True).first()` per worker (via
|
||
# get_worker_active_team) — which fires a fresh SELECT even though
|
||
# active_workers was prefetched, because `.filter()` bypasses the
|
||
# prefetch cache.
|
||
# We batch both into dict lookups keyed by worker_id.
|
||
workers_with_active_loan = set(
|
||
Loan.objects.filter(active=True).values_list('worker_id', flat=True).distinct()
|
||
)
|
||
# Map worker_id → first active Team instance (mirrors get_worker_active_team).
|
||
# We load every active team once, then walk the through-table to find the
|
||
# first active team per worker.
|
||
active_team_by_id = {t.id: t for t in Team.objects.filter(active=True)}
|
||
worker_active_team = {}
|
||
for membership in Team.workers.through.objects.filter(
|
||
team_id__in=active_team_by_id.keys()
|
||
).values('team_id', 'worker_id'):
|
||
wid = membership['worker_id']
|
||
if wid in worker_active_team:
|
||
continue
|
||
worker_active_team[wid] = active_team_by_id[membership['team_id']]
|
||
|
||
for worker in active_workers:
|
||
# Find unpaid work logs for this worker.
|
||
# A log is "unpaid for this worker" if no PayrollRecord links
|
||
# to BOTH this log AND this worker.
|
||
unpaid_logs = []
|
||
for log in worker.work_logs.all():
|
||
paid_worker_ids = {pr.worker_id for pr in log.payroll_records.all()}
|
||
if worker.id not in paid_worker_ids:
|
||
unpaid_logs.append(log)
|
||
|
||
log_count = len(unpaid_logs)
|
||
log_amount = log_count * worker.daily_rate
|
||
|
||
# Find unpriced overtime in unpaid logs
|
||
ot_data_worker = []
|
||
for log in unpaid_logs:
|
||
if log.overtime_amount > 0:
|
||
priced_ids = {w.id for w in log.priced_workers.all()}
|
||
if worker.id not in priced_ids:
|
||
ot_entry = {
|
||
'worker_id': worker.id,
|
||
'worker_name': worker.name,
|
||
'log_id': log.id,
|
||
'date': log.date.strftime('%Y-%m-%d'),
|
||
'project': log.project.name,
|
||
'overtime': float(log.overtime_amount),
|
||
'ot_label': log.get_overtime_amount_display(),
|
||
}
|
||
ot_data_worker.append(ot_entry)
|
||
all_ot_data.append(ot_entry)
|
||
|
||
# Calculate net adjustment amount
|
||
pending_adjs = worker.pending_adjustments_list
|
||
adj_total = Decimal('0.00')
|
||
worker_adj_add = Decimal('0.00')
|
||
worker_adj_sub = Decimal('0.00')
|
||
for adj in pending_adjs:
|
||
if adj.type in ADDITIVE_TYPES:
|
||
adj_total += adj.amount
|
||
worker_adj_add += adj.amount
|
||
elif adj.type in DEDUCTIVE_TYPES:
|
||
adj_total -= adj.amount
|
||
worker_adj_sub += adj.amount
|
||
|
||
total_payable = log_amount + adj_total
|
||
|
||
# Only include workers who have something pending
|
||
if log_count > 0 or pending_adjs:
|
||
# --- Overdue detection ---
|
||
# A worker is "overdue" if they have unpaid work from a completed pay period.
|
||
# Uses their team's pay schedule to determine the cutoff date.
|
||
# PERF: team lookup via pre-computed dict (no per-worker SELECT).
|
||
team = worker_active_team.get(worker.id)
|
||
team_name = team.name if team else ''
|
||
earliest_unpaid = min((l.date for l in unpaid_logs), default=None) if unpaid_logs else None
|
||
is_overdue = False
|
||
if earliest_unpaid and team and team.pay_frequency and team.pay_start_date:
|
||
period_start, period_end = get_pay_period(team)
|
||
if period_start:
|
||
cutoff = period_start - datetime.timedelta(days=1)
|
||
is_overdue = earliest_unpaid <= cutoff
|
||
|
||
# PERF: loan membership via pre-computed set (no per-worker SELECT).
|
||
has_loan = worker.id in workers_with_active_loan
|
||
|
||
# Most recent project — used by the "Adjust" button to pre-select project
|
||
last_project_id = unpaid_logs[-1].project_id if unpaid_logs else None
|
||
|
||
workers_data.append({
|
||
'worker': worker,
|
||
'unpaid_count': log_count,
|
||
'unpaid_amount': log_amount,
|
||
'adj_amount': adj_total,
|
||
'total_payable': total_payable,
|
||
'adjustments': pending_adjs,
|
||
'logs': unpaid_logs,
|
||
'ot_data': ot_data_worker,
|
||
'day_rate': float(worker.daily_rate),
|
||
'team_name': team_name,
|
||
'is_overdue': is_overdue,
|
||
'has_loan': has_loan,
|
||
'earliest_unpaid': earliest_unpaid,
|
||
'last_project_id': last_project_id,
|
||
})
|
||
outstanding_total += max(total_payable, Decimal('0.00'))
|
||
unpaid_wages_total += log_amount
|
||
pending_adj_add_total += worker_adj_add
|
||
pending_adj_sub_total += worker_adj_sub
|
||
|
||
# --- Payment history ---
|
||
paid_records = PayrollRecord.objects.select_related(
|
||
'worker'
|
||
).order_by('-date', '-id')
|
||
|
||
# --- Recent payments total (last 60 days) ---
|
||
sixty_days_ago = timezone.now().date() - timezone.timedelta(days=60)
|
||
recent_payments_total = PayrollRecord.objects.filter(
|
||
date__gte=sixty_days_ago
|
||
).aggregate(total=Sum('amount_paid'))['total'] or Decimal('0.00')
|
||
|
||
# --- Outstanding cost per project ---
|
||
# Check per-worker: a WorkLog is "unpaid for worker X" if no PayrollRecord
|
||
# links BOTH that log AND that worker. This handles partially-paid logs.
|
||
#
|
||
# PERF: materialise the active-project list once and reuse it for both
|
||
# the outstanding-costs loop and the chart-data loop below. Previously
|
||
# each loop re-queried `Project.objects.filter(active=True)`, firing the
|
||
# same SELECT twice per dashboard render.
|
||
active_projects_list = list(Project.objects.filter(active=True))
|
||
active_project_ids = [p.id for p in active_projects_list]
|
||
|
||
# === CHART DATE-WINDOW SETUP (moved up so the batched queries below can
|
||
# also use it) ===
|
||
today = timezone.now().date()
|
||
chart_months = []
|
||
for i in range(5, -1, -1):
|
||
m = today.month - i
|
||
y = today.year
|
||
while m <= 0:
|
||
m += 12
|
||
y -= 1
|
||
chart_months.append((y, m))
|
||
|
||
chart_labels = [
|
||
datetime.date(y, m, 1).strftime('%b %Y') for y, m in chart_months
|
||
]
|
||
six_months_ago_date = datetime.date(chart_months[0][0], chart_months[0][1], 1)
|
||
|
||
# === BATCHED AGGREGATES: one SQL query per concept instead of per-project ===
|
||
# Previously we looped over each active project and issued:
|
||
# - 1 SELECT of WorkLog (with workers prefetch) per project
|
||
# - 1 SELECT of PayrollAdjustment (unpaid) per project
|
||
# - 1 SELECT of WorkLog (workers prefetch) per project × 6 months
|
||
# - 1 SELECT of PayrollAdjustment (paid) per project × 6 months
|
||
# On a ~7-project dataset that's ~7+7+42+42 ≈ 98 SQL round-trips.
|
||
# The rewrite replaces those with 4 GROUP-BY queries that return
|
||
# project_id (and month, where relevant) → total, plus one query for
|
||
# per-log paid-worker sets.
|
||
|
||
# --- 1. Unpaid-work-log cost per project ---
|
||
# We can't do pure SQL aggregation for this because a WorkLog can be
|
||
# partially paid (one worker of two). We still need per-log inspection,
|
||
# BUT we can load all unpaid-or-partially-paid logs + their workers +
|
||
# payroll_records in a bounded set of queries using prefetch_related
|
||
# rather than looping one project at a time.
|
||
project_outstanding_map = {pid: Decimal('0.00') for pid in active_project_ids}
|
||
|
||
all_project_logs = WorkLog.objects.filter(
|
||
project_id__in=active_project_ids
|
||
).prefetch_related('payroll_records', 'workers')
|
||
for log in all_project_logs:
|
||
paid_worker_ids = {pr.worker_id for pr in log.payroll_records.all()}
|
||
for w in log.workers.all():
|
||
if w.id not in paid_worker_ids:
|
||
project_outstanding_map[log.project_id] += w.daily_rate
|
||
|
||
# --- 2. Unpaid-adjustment net per project (ONE GROUP BY via Coalesce) ---
|
||
# Each unpaid adjustment is attributed to exactly ONE project: its
|
||
# direct FK (`project_id`) if set, otherwise its work_log's project.
|
||
# This mirrors the original OR-filter semantics
|
||
# (`Q(project=P) | Q(work_log__project=P)`) which naturally dedupes:
|
||
# a row with BOTH FKs pointing at the same project matches ONCE.
|
||
#
|
||
# Why Coalesce matters: every Overtime adjustment is created by
|
||
# price_overtime() with BOTH adj.project AND adj.work_log.project set
|
||
# to the same project. A naive "filter by direct FK + filter by
|
||
# work_log FK + sum both maps in Python" approach double-counts
|
||
# every Overtime row. Coalesce picks ONE effective project per row
|
||
# so each amount contributes to the outstanding-costs card once.
|
||
unpaid_adj_rows = (
|
||
PayrollAdjustment.objects
|
||
.filter(payroll_record__isnull=True)
|
||
.filter(
|
||
Q(project_id__in=active_project_ids)
|
||
| Q(work_log__project_id__in=active_project_ids)
|
||
)
|
||
.annotate(
|
||
effective_project_id=Coalesce('project_id', 'work_log__project_id')
|
||
)
|
||
.values('effective_project_id', 'type')
|
||
.annotate(total=Sum('amount'))
|
||
)
|
||
for row in unpaid_adj_rows:
|
||
pid = row['effective_project_id']
|
||
# Only contribute to active projects we're tracking.
|
||
if pid not in project_outstanding_map:
|
||
continue
|
||
total = row['total'] or Decimal('0.00')
|
||
if row['type'] in ADDITIVE_TYPES:
|
||
project_outstanding_map[pid] += total
|
||
elif row['type'] in DEDUCTIVE_TYPES:
|
||
project_outstanding_map[pid] -= total
|
||
|
||
outstanding_project_costs = []
|
||
for project in active_projects_list:
|
||
cost = project_outstanding_map[project.id]
|
||
if cost != 0:
|
||
outstanding_project_costs.append({
|
||
'name': project.name,
|
||
'cost': cost,
|
||
})
|
||
|
||
# Monthly payroll totals
|
||
paid_by_month_qs = PayrollRecord.objects.annotate(
|
||
month=TruncMonth('date')
|
||
).values('month').annotate(total=Sum('amount_paid')).order_by('month')
|
||
paid_by_month = {
|
||
(r['month'].year, r['month'].month): float(r['total'])
|
||
for r in paid_by_month_qs
|
||
}
|
||
chart_totals = [paid_by_month.get((y, m), 0) for y, m in chart_months]
|
||
|
||
# --- 3. Per-project × per-month work-log cost (for stacked bar chart) ---
|
||
# Aggregate worker×log rows directly in SQL: one GROUP BY
|
||
# (project_id, year, month) returns all we need.
|
||
project_month_wage = {
|
||
(pid, y, m): Decimal('0.00')
|
||
for pid in active_project_ids for y, m in chart_months
|
||
}
|
||
wage_rows = WorkLog.objects.filter(
|
||
project_id__in=active_project_ids,
|
||
date__gte=six_months_ago_date,
|
||
).annotate(month=TruncMonth('date')).values(
|
||
'project_id', 'month', 'workers__monthly_salary'
|
||
).annotate(worker_count=Count('workers'))
|
||
# Each row = one (project, month, distinct salary) with how many workers
|
||
# at that salary were logged. Multiply by daily_rate (salary / 20) × count.
|
||
for row in wage_rows:
|
||
salary = row['workers__monthly_salary']
|
||
if salary is None:
|
||
continue
|
||
key = (row['project_id'], row['month'].year, row['month'].month)
|
||
if key not in project_month_wage:
|
||
continue
|
||
daily = Decimal(salary) / Decimal('20.00')
|
||
project_month_wage[key] += daily * row['worker_count']
|
||
|
||
# --- 4. Per-project × per-month paid-adjustment net (ONE GROUP BY) ---
|
||
# Same Coalesce trick as site #2: pick ONE effective project per
|
||
# adjustment row so we don't double-count the stacked-chart bars
|
||
# for Overtime (which always has both FKs pointing at the same
|
||
# project — see note on the unpaid block above).
|
||
paid_adj_rows = (
|
||
PayrollAdjustment.objects
|
||
.filter(payroll_record__isnull=False, date__gte=six_months_ago_date)
|
||
.filter(
|
||
Q(project_id__in=active_project_ids)
|
||
| Q(work_log__project_id__in=active_project_ids)
|
||
)
|
||
.annotate(
|
||
effective_project_id=Coalesce('project_id', 'work_log__project_id'),
|
||
month=TruncMonth('date'),
|
||
)
|
||
.values('effective_project_id', 'month', 'type')
|
||
.annotate(total=Sum('amount'))
|
||
)
|
||
# Accumulate add/sub per (project, year, month) keys in Python.
|
||
paid_adj_add = {}
|
||
paid_adj_sub = {}
|
||
for row in paid_adj_rows:
|
||
pid = row['effective_project_id']
|
||
if pid not in project_outstanding_map: # only active projects
|
||
continue
|
||
key = (pid, row['month'].year, row['month'].month)
|
||
total = row['total'] or Decimal('0.00')
|
||
if row['type'] in ADDITIVE_TYPES:
|
||
paid_adj_add[key] = paid_adj_add.get(key, Decimal('0.00')) + total
|
||
elif row['type'] in DEDUCTIVE_TYPES:
|
||
paid_adj_sub[key] = paid_adj_sub.get(key, Decimal('0.00')) + total
|
||
|
||
project_chart_data = []
|
||
for project in active_projects_list:
|
||
monthly_data = []
|
||
for y, m in chart_months:
|
||
key = (project.id, y, m)
|
||
month_cost = project_month_wage.get(key, Decimal('0.00'))
|
||
month_cost += paid_adj_add.get(key, Decimal('0.00'))
|
||
month_cost -= paid_adj_sub.get(key, Decimal('0.00'))
|
||
monthly_data.append(float(month_cost))
|
||
if any(v > 0 for v in monthly_data):
|
||
project_chart_data.append({
|
||
'name': project.name,
|
||
'data': monthly_data,
|
||
})
|
||
|
||
# === CHART DATA: Per-Worker Monthly Breakdown ===
|
||
# Pre-compute payment breakdown for each active worker over the last 6 months.
|
||
# This powers the "By Worker" toggle on the Monthly Payroll Totals chart.
|
||
# Only ~14 workers x 6 months = tiny dataset, so we embed it all as JSON
|
||
# and switching between workers is instant (no server round-trips).
|
||
#
|
||
# `six_months_ago_date` is already defined above (hoisted next to the
|
||
# date-window setup) and reused here.
|
||
|
||
# Query 1: Total amount paid per worker per month.
|
||
# Uses database-level grouping — one query for ALL workers at once.
|
||
worker_monthly_paid_qs = PayrollRecord.objects.filter(
|
||
worker__active=True,
|
||
date__gte=six_months_ago_date,
|
||
).values(
|
||
'worker_id',
|
||
month=TruncMonth('date'),
|
||
).annotate(total=Sum('amount_paid'))
|
||
|
||
# Build a fast lookup dict: {(worker_id, year, month): total_paid}
|
||
worker_paid_lookup = {}
|
||
for row in worker_monthly_paid_qs:
|
||
key = (row['worker_id'], row['month'].year, row['month'].month)
|
||
worker_paid_lookup[key] = float(row['total'])
|
||
|
||
# Query 2: Paid adjustment totals grouped by worker, type, and month.
|
||
# "Paid" means the adjustment has a linked PayrollRecord.
|
||
# We group by the PayrollRecord's date (not the adjustment date)
|
||
# so it lines up with when the payment actually happened.
|
||
worker_monthly_adj_qs = PayrollAdjustment.objects.filter(
|
||
payroll_record__isnull=False,
|
||
worker__active=True,
|
||
payroll_record__date__gte=six_months_ago_date,
|
||
).values(
|
||
'worker_id',
|
||
'type',
|
||
month=TruncMonth('payroll_record__date'),
|
||
).annotate(total=Sum('amount'))
|
||
|
||
# Build a fast lookup dict: {(worker_id, year, month, type): total_amount}
|
||
worker_adj_lookup = {}
|
||
for row in worker_monthly_adj_qs:
|
||
key = (row['worker_id'], row['month'].year, row['month'].month, row['type'])
|
||
worker_adj_lookup[key] = float(row['total'])
|
||
|
||
# Build the final data structure for JavaScript.
|
||
# For each worker with payment history, create 6 monthly entries showing
|
||
# how their pay breaks down into base pay, overtime, bonuses, etc.
|
||
#
|
||
# Base pay is reverse-engineered from the net total:
|
||
# amount_paid = base + overtime + bonus + new_loan - deduction - loan_repayment - advance
|
||
# So: base = amount_paid - overtime - bonus - new_loan + deduction + loan_repayment + advance
|
||
#
|
||
# PERF: reuse `active_workers` (already loaded + cached at the top of the
|
||
# function) instead of re-querying Worker.objects.filter(active=True).
|
||
# Same ordered row-set; saves an SQL round-trip. The unused prefetches
|
||
# on `active_workers` are already materialised so they cost nothing extra.
|
||
worker_chart_data = {}
|
||
for worker in active_workers:
|
||
months_data = []
|
||
has_any_data = False
|
||
|
||
for y, m in chart_months:
|
||
total_paid = worker_paid_lookup.get((worker.id, y, m), 0)
|
||
overtime = worker_adj_lookup.get((worker.id, y, m, 'Overtime'), 0)
|
||
bonus = worker_adj_lookup.get((worker.id, y, m, 'Bonus'), 0)
|
||
new_loan = worker_adj_lookup.get((worker.id, y, m, 'New Loan'), 0)
|
||
deduction = worker_adj_lookup.get((worker.id, y, m, 'Deduction'), 0)
|
||
loan_repayment = worker_adj_lookup.get((worker.id, y, m, 'Loan Repayment'), 0)
|
||
advance = worker_adj_lookup.get((worker.id, y, m, 'Advance Payment'), 0)
|
||
|
||
# Reverse-engineer base pay from the net total
|
||
base_pay = total_paid - overtime - bonus - new_loan + deduction + loan_repayment + advance
|
||
# Clamp to zero — a negative base can happen if adjustments exceed day-rate earnings
|
||
base_pay = max(base_pay, 0)
|
||
|
||
if total_paid > 0:
|
||
has_any_data = True
|
||
|
||
months_data.append({
|
||
'base': round(base_pay, 2),
|
||
'overtime': round(overtime, 2),
|
||
'bonus': round(bonus, 2),
|
||
'new_loan': round(new_loan, 2),
|
||
'deduction': round(deduction, 2),
|
||
'loan_repayment': round(loan_repayment, 2),
|
||
'advance': round(advance, 2),
|
||
'total': round(total_paid, 2),
|
||
})
|
||
|
||
# Only include workers who actually received at least one payment
|
||
if has_any_data:
|
||
worker_chart_data[str(worker.id)] = {
|
||
'name': worker.name,
|
||
'months': months_data,
|
||
}
|
||
|
||
# --- Loans ---
|
||
loan_filter = request.GET.get('loan_status', 'active')
|
||
if loan_filter == 'history':
|
||
loans = Loan.objects.filter(active=False).select_related('worker').order_by('-date')
|
||
else:
|
||
loans = Loan.objects.filter(active=True).select_related('worker').order_by('-date')
|
||
|
||
# Total active loan balance (always shown in analytics card, regardless of tab)
|
||
active_loans = Loan.objects.filter(active=True)
|
||
active_loans_count = active_loans.count()
|
||
active_loans_balance = active_loans.aggregate(
|
||
total=Sum('remaining_balance')
|
||
)['total'] or Decimal('0.00')
|
||
|
||
# --- Active projects and workers for modal dropdowns ---
|
||
# `active_workers` is reused (already loaded + evaluated by the workers_data
|
||
# loop). For the modal-dropdown context key we alias it as `all_workers`
|
||
# so the template name stays descriptive.
|
||
all_workers = active_workers
|
||
active_projects = Project.objects.filter(active=True).order_by('name')
|
||
all_teams = Team.objects.filter(active=True).prefetch_related(
|
||
# PERF: prefetch only the active workers so the template's
|
||
# `team.workers.all` (and our map below) already filters to active
|
||
# without re-querying. Using `.filter()` on the plain `workers`
|
||
# accessor bypasses Django's prefetch cache and fires one SELECT
|
||
# per team — an N+1 we need to avoid.
|
||
Prefetch('workers', queryset=Worker.objects.filter(active=True), to_attr='active_workers_cached')
|
||
).order_by('name')
|
||
|
||
# Team-workers map for auto-selecting workers when a team is picked.
|
||
# Uses the prefetched `active_workers_cached` list — no extra queries.
|
||
team_workers_map = {}
|
||
for team in all_teams:
|
||
team_workers_map[str(team.id)] = [w.id for w in team.active_workers_cached]
|
||
|
||
# NOTE: Pass raw Python objects here, NOT json.dumps() strings.
|
||
# The template uses Django's |json_script filter which handles
|
||
# JSON serialization. If we pre-serialize with json.dumps(), the
|
||
# filter double-encodes the data and JavaScript receives strings
|
||
# instead of arrays/objects, which crashes the entire script.
|
||
context = {
|
||
'workers_data': workers_data,
|
||
'paid_records': paid_records,
|
||
'outstanding_total': outstanding_total,
|
||
'unpaid_wages_total': unpaid_wages_total,
|
||
'pending_adj_add_total': pending_adj_add_total,
|
||
'pending_adj_sub_total': pending_adj_sub_total,
|
||
'recent_payments_total': recent_payments_total,
|
||
'outstanding_project_costs': outstanding_project_costs,
|
||
'active_tab': status_filter,
|
||
'all_workers': all_workers,
|
||
'all_teams': all_teams,
|
||
'team_workers_map_json': team_workers_map,
|
||
'adjustment_types': PayrollAdjustment.TYPE_CHOICES,
|
||
# List of type labels that ADD to a worker's pay (Bonus, Overtime,
|
||
# New Loan, Advance Payment). Used by the Pending and History tabs'
|
||
# adjustment badges to show a + or - sign next to the amount.
|
||
'additive_types': list(ADDITIVE_TYPES),
|
||
'active_projects': active_projects,
|
||
'loans': loans,
|
||
'loan_filter': loan_filter,
|
||
'chart_labels_json': chart_labels,
|
||
'chart_totals_json': chart_totals,
|
||
'project_chart_json': project_chart_data,
|
||
'worker_chart_json': worker_chart_data,
|
||
'overtime_data_json': all_ot_data,
|
||
'today': today, # For pre-filling date fields in modals
|
||
'active_loans_count': active_loans_count,
|
||
'active_loans_balance': active_loans_balance,
|
||
}
|
||
|
||
# =========================================================================
|
||
# === ADJUSTMENTS TAB CONTEXT ===
|
||
# This block only runs when the user is on the Adjustments tab
|
||
# (i.e. the URL has ?status=adjustments). It builds a filtered, sorted,
|
||
# paginated list of adjustments plus the little stats cards above it.
|
||
#
|
||
# Group-by rendering, bulk-select, and Team->Workers cross-filter
|
||
# will be added in later tasks — this task just covers the basic data.
|
||
# =========================================================================
|
||
if status_filter == 'adjustments':
|
||
from django.core.paginator import Paginator
|
||
from django.utils.dateparse import parse_date
|
||
|
||
# --- Read the filter choices the user picked from the URL ---
|
||
# Lists come in as ?type=Bonus&type=Deduction etc.
|
||
type_filter = request.GET.getlist('type')
|
||
worker_filter = [
|
||
int(v) for v in request.GET.getlist('worker') if v.strip().isdigit()
|
||
]
|
||
team_filter = [
|
||
int(v) for v in request.GET.getlist('team') if v.strip().isdigit()
|
||
]
|
||
adj_status = request.GET.get('adj_status', '').strip()
|
||
adj_date_from = request.GET.get('adj_date_from', '').strip()
|
||
adj_date_to = request.GET.get('adj_date_to', '').strip()
|
||
sort_col = request.GET.get('sort', 'date').strip()
|
||
sort_order = request.GET.get('order', 'desc').strip()
|
||
|
||
# --- Base queryset with eager-loading of related tables ---
|
||
# select_related pulls worker/project/payment in the same SQL query
|
||
# so we don't hit the database once per row later.
|
||
adjustments = PayrollAdjustment.objects.select_related(
|
||
'worker', 'project', 'payroll_record'
|
||
).prefetch_related('worker__teams')
|
||
|
||
# --- Apply each filter only if the user actually set one ---
|
||
if type_filter:
|
||
adjustments = adjustments.filter(type__in=type_filter)
|
||
if worker_filter:
|
||
adjustments = adjustments.filter(worker_id__in=worker_filter)
|
||
if team_filter:
|
||
# SUBQUERY PATTERN (CLAUDE.md "M2M filter + aggregate inflation"):
|
||
# Joining straight on workers__teams would multiply the row count
|
||
# if a worker is on multiple teams, so we pick the matching worker
|
||
# IDs in a subquery first and then filter the outer queryset
|
||
# without any JOIN expansion.
|
||
adjustments = adjustments.filter(
|
||
worker__in=Worker.objects.filter(
|
||
teams__id__in=team_filter
|
||
).values('id')
|
||
)
|
||
if adj_status == 'unpaid':
|
||
adjustments = adjustments.filter(payroll_record__isnull=True)
|
||
elif adj_status == 'paid':
|
||
adjustments = adjustments.filter(payroll_record__isnull=False)
|
||
if adj_date_from:
|
||
parsed = parse_date(adj_date_from)
|
||
if parsed:
|
||
adjustments = adjustments.filter(date__gte=parsed)
|
||
if adj_date_to:
|
||
parsed = parse_date(adj_date_to)
|
||
if parsed:
|
||
adjustments = adjustments.filter(date__lte=parsed)
|
||
|
||
# --- Sort the results ---
|
||
# The URL's "sort" value is a short label; translate it to the
|
||
# actual database column. Unknown values fall back to date.
|
||
sort_map = {
|
||
'date': 'date',
|
||
'worker': 'worker__name',
|
||
'amount': 'amount',
|
||
'status': 'payroll_record',
|
||
}
|
||
sort_field = sort_map.get(sort_col, 'date')
|
||
if sort_order == 'desc':
|
||
sort_field = '-' + sort_field
|
||
# Secondary key "-id" keeps rows in a stable order when the
|
||
# main sort key has ties (e.g. two adjustments on the same date).
|
||
adjustments = adjustments.order_by(sort_field, '-id')
|
||
|
||
# --- Pagination: 50 rows per page (flat view only) ---
|
||
# PERF: build the paginator first so we can reuse its cached `count`
|
||
# for the "Total adjustments" stat card below — avoids a duplicate
|
||
# `SELECT COUNT(*) FROM core_payrolladjustment`.
|
||
paginator = Paginator(adjustments, 50)
|
||
adj_page = paginator.get_page(request.GET.get('page', 1))
|
||
|
||
# --- Stats cards (all computed BEFORE pagination cuts the rows) ---
|
||
# These numbers always reflect what the current filter produces,
|
||
# not just what fits on the current page.
|
||
adj_total_count = paginator.count
|
||
unpaid_qs = adjustments.filter(payroll_record__isnull=True)
|
||
adj_unpaid_count = unpaid_qs.count()
|
||
adj_unpaid_sum = unpaid_qs.aggregate(
|
||
total=Sum('amount')
|
||
)['total'] or Decimal('0.00')
|
||
adj_additive_sum = adjustments.filter(
|
||
type__in=ADDITIVE_TYPES
|
||
).aggregate(total=Sum('amount'))['total'] or Decimal('0.00')
|
||
adj_deductive_sum = adjustments.filter(
|
||
type__in=DEDUCTIVE_TYPES
|
||
).aggregate(total=Sum('amount'))['total'] or Decimal('0.00')
|
||
|
||
# --- Group-by rendering (optional; None = flat view) ---
|
||
# When the user clicks the "By Type" or "By Worker" toggle above
|
||
# the table, we bucket the FULL filtered queryset (not the paginated
|
||
# slice) so each group's row-count and net-sum reflect the whole
|
||
# filter, not just whatever landed on this page. Pagination is
|
||
# suppressed in the template when grouped (the group headers act
|
||
# as their own navigation).
|
||
group_by = request.GET.get('group_by', '').strip()
|
||
adj_groups = None
|
||
if group_by in ('type', 'worker'):
|
||
adj_groups = _group_adjustments(list(adjustments), group_by)
|
||
|
||
# --- Everything the Adjustments tab template will need ---
|
||
context.update({
|
||
'adj_page': adj_page,
|
||
'adj_groups': adj_groups,
|
||
'adj_total_count': adj_total_count,
|
||
'adj_unpaid_count': adj_unpaid_count,
|
||
'adj_unpaid_sum': adj_unpaid_sum,
|
||
'adj_additive_sum': adj_additive_sum,
|
||
'adj_deductive_sum': adj_deductive_sum,
|
||
'adj_filter_values': {
|
||
'type': type_filter,
|
||
'worker': worker_filter,
|
||
'team': team_filter,
|
||
'adj_status': adj_status,
|
||
'adj_date_from': adj_date_from,
|
||
'adj_date_to': adj_date_to,
|
||
'sort': sort_col,
|
||
'order': sort_order,
|
||
'group_by': group_by,
|
||
},
|
||
# (db_value, display_label) pairs for the Type filter popover on the
|
||
# Adjustments tab. Uses TYPE_CHOICES directly so the checkbox labels
|
||
# show the short display labels (Loan / Advance / Advance Repaid)
|
||
# while checkbox values stay on the DB value (which the view filters
|
||
# by). Stored under a separate key so we don't clobber the existing
|
||
# 'adjustment_types' context var (also TYPE_CHOICES tuples, used by
|
||
# the Add/Edit adjustment modals).
|
||
'adj_type_choices': PayrollAdjustment.TYPE_CHOICES,
|
||
# PERF: reuse `all_workers`/`all_teams` (already cached above for
|
||
# the Add-Adjustment modal) — same row-set, same ordering, so no
|
||
# need to re-query the database for the filter popovers.
|
||
'all_workers_for_filter': all_workers,
|
||
'all_teams_for_filter': all_teams,
|
||
# Task 4 will use this to decide +/- signs on each row.
|
||
'additive_types': list(ADDITIVE_TYPES),
|
||
# === CROSS-FILTER SOURCE: (team_id, worker_id) PAIRS ===
|
||
# Consumed by the popover JS to disable Workers checkboxes that
|
||
# aren't in any currently-URL-selected team. Raw Python list
|
||
# — |json_script in the template handles safe serialisation
|
||
# (NOT json.dumps — see the 2026-04-23 inline-filters regression).
|
||
'team_worker_pairs_json': list(
|
||
Team.workers.through.objects.values('team_id', 'worker_id').distinct()
|
||
),
|
||
})
|
||
|
||
return render(request, 'core/payroll_dashboard.html', context)
|
||
|
||
|
||
# =============================================================================
|
||
# === SINGLE PAYMENT HELPER ===
|
||
# Core payment logic used by both individual payments and batch payments.
|
||
# Locks the worker row, creates a PayrollRecord, links logs/adjustments,
|
||
# and handles loan repayment deductions — all inside an atomic transaction.
|
||
# =============================================================================
|
||
|
||
def _process_single_payment(worker_id, selected_log_ids=None, selected_adj_ids=None):
|
||
"""
|
||
Process payment for one worker inside an atomic transaction.
|
||
Returns (payroll_record, log_count, logs_amount) on success, or None if nothing to pay.
|
||
|
||
- worker_id: the Worker's PK
|
||
- selected_log_ids: list of WorkLog IDs to include (None = all unpaid)
|
||
- selected_adj_ids: list of PayrollAdjustment IDs to include (None = all pending)
|
||
"""
|
||
with transaction.atomic():
|
||
# Lock this worker's row — any concurrent request for the same
|
||
# worker will wait here until this transaction commits.
|
||
worker = Worker.objects.select_for_update().get(id=worker_id)
|
||
|
||
# Get unpaid logs, filter to selected if IDs provided
|
||
all_unpaid_logs = worker.work_logs.exclude(payroll_records__worker=worker)
|
||
if selected_log_ids:
|
||
unpaid_logs = all_unpaid_logs.filter(id__in=selected_log_ids)
|
||
else:
|
||
unpaid_logs = all_unpaid_logs
|
||
|
||
log_count = unpaid_logs.count()
|
||
logs_amount = log_count * worker.daily_rate
|
||
|
||
# Get pending adjustments, filter to selected if IDs provided
|
||
all_pending_adjs = list(worker.adjustments.filter(payroll_record__isnull=True))
|
||
if selected_adj_ids:
|
||
selected_adj_set = set(selected_adj_ids)
|
||
pending_adjs = [a for a in all_pending_adjs if a.id in selected_adj_set]
|
||
else:
|
||
pending_adjs = all_pending_adjs
|
||
|
||
# Nothing to pay — already paid or nothing owed
|
||
if log_count == 0 and not pending_adjs:
|
||
return None
|
||
|
||
# Calculate net adjustment
|
||
adj_amount = Decimal('0.00')
|
||
for adj in pending_adjs:
|
||
if adj.type in ADDITIVE_TYPES:
|
||
adj_amount += adj.amount
|
||
elif adj.type in DEDUCTIVE_TYPES:
|
||
adj_amount -= adj.amount
|
||
|
||
total_amount = logs_amount + adj_amount
|
||
|
||
# Create the PayrollRecord
|
||
payroll_record = PayrollRecord.objects.create(
|
||
worker=worker,
|
||
amount_paid=total_amount,
|
||
date=timezone.now().date(),
|
||
)
|
||
|
||
# Link work logs to this payment
|
||
payroll_record.work_logs.set(unpaid_logs)
|
||
|
||
# Link adjustments + handle loan repayments
|
||
for adj in pending_adjs:
|
||
adj.payroll_record = payroll_record
|
||
adj.save()
|
||
|
||
# If this is a loan or advance repayment, deduct from the balance
|
||
if adj.type in ('Loan Repayment', 'Advance Repayment') and adj.loan:
|
||
adj.loan.remaining_balance -= adj.amount
|
||
if adj.loan.remaining_balance <= 0:
|
||
adj.loan.remaining_balance = Decimal('0.00')
|
||
adj.loan.active = False
|
||
# === ADVANCE-TO-LOAN CONVERSION ===
|
||
# If an advance was only partially repaid, the remainder is
|
||
# now a regular loan. Change the type so it shows under
|
||
# "Loans" in the Loans tab and uses "Loan Repayment" going forward.
|
||
elif adj.type == 'Advance Repayment' and adj.loan.loan_type == 'advance':
|
||
adj.loan.loan_type = 'loan'
|
||
adj.loan.save()
|
||
|
||
return (payroll_record, log_count, logs_amount)
|
||
|
||
|
||
# =============================================================================
|
||
# === PROCESS PAYMENT ===
|
||
# HTTP endpoint for paying a single worker. Reads selected IDs from the POST
|
||
# form (split payslip), delegates to _process_single_payment, then emails.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def process_payment(request, worker_id):
|
||
if request.method != 'POST':
|
||
return redirect('payroll_dashboard')
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Not authorized.")
|
||
|
||
# Validate the worker exists (returns 404 if not found)
|
||
worker = get_object_or_404(Worker, id=worker_id)
|
||
|
||
# === SPLIT PAYSLIP SUPPORT ===
|
||
# If the POST includes specific log/adjustment IDs (from the preview
|
||
# modal's checkboxes), only pay those selected items.
|
||
# If no IDs provided (e.g., the quick "Pay" button on the table),
|
||
# fall back to paying everything — backward compatible.
|
||
selected_log_ids = [int(x) for x in request.POST.getlist('selected_log_ids') if x.isdigit()]
|
||
selected_adj_ids = [int(x) for x in request.POST.getlist('selected_adj_ids') if x.isdigit()]
|
||
|
||
result = _process_single_payment(
|
||
worker_id,
|
||
selected_log_ids=selected_log_ids or None,
|
||
selected_adj_ids=selected_adj_ids or None,
|
||
)
|
||
|
||
if result is None:
|
||
messages.warning(request, f'No pending payments for {worker.name} — already paid or nothing owed.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
payroll_record, log_count, logs_amount = result
|
||
|
||
# =========================================================================
|
||
# EMAIL PAYSLIP (outside the transaction — if email fails, payment is
|
||
# still saved. We don't want a network error to roll back a real payment.)
|
||
# =========================================================================
|
||
_send_payslip_email(request, worker, payroll_record, log_count, logs_amount)
|
||
|
||
return redirect('payroll_dashboard')
|
||
|
||
|
||
# =============================================================================
|
||
# === PAYSLIP EMAIL HELPER ===
|
||
# Generates and sends a payslip (HTML email + PDF attachment).
|
||
# Used by both process_payment (regular salary) and add_adjustment (advances).
|
||
# =============================================================================
|
||
|
||
def _send_payslip_email(request, worker, payroll_record, log_count, logs_amount, suppress_messages=False):
|
||
"""
|
||
Generate and email a payslip for a completed payment.
|
||
Called after a PayrollRecord has been created and adjustments linked.
|
||
|
||
- request: Django request (for messages framework)
|
||
- worker: the Worker being paid
|
||
- payroll_record: the PayrollRecord just created
|
||
- log_count: number of work logs in this payment (0 for advance-only)
|
||
- logs_amount: total earnings from work logs (Decimal('0.00') for advance-only)
|
||
- suppress_messages: if True, skip Django messages (used by batch pay)
|
||
"""
|
||
# Lazy import — avoids crashing the app if xhtml2pdf isn't installed
|
||
from .utils import render_to_pdf
|
||
|
||
total_amount = payroll_record.amount_paid
|
||
|
||
# === DETECT STANDALONE PAYMENT (no work logs, single adjustment) ===
|
||
# Advance-only or Loan-only payments use a cleaner payslip layout
|
||
# showing just the amount instead of "0 days worked + adjustment".
|
||
advance_adj = None
|
||
loan_adj = None
|
||
if log_count == 0:
|
||
adjs_list = list(payroll_record.adjustments.all())
|
||
if len(adjs_list) == 1:
|
||
if adjs_list[0].type == 'Advance Payment':
|
||
advance_adj = adjs_list[0]
|
||
elif adjs_list[0].type == 'New Loan':
|
||
loan_adj = adjs_list[0]
|
||
|
||
is_advance = advance_adj is not None
|
||
is_loan = loan_adj is not None
|
||
if is_advance:
|
||
subject = f"Advance Payslip for {worker.name} - {payroll_record.date}"
|
||
elif is_loan:
|
||
subject = f"Loan Payslip for {worker.name} - {payroll_record.date}"
|
||
else:
|
||
subject = f"Payslip for {worker.name} - {payroll_record.date}"
|
||
|
||
# Context for both the HTML email body and the PDF attachment
|
||
email_context = {
|
||
'record': payroll_record,
|
||
'logs_count': log_count,
|
||
'logs_amount': logs_amount,
|
||
'adjustments': payroll_record.adjustments.all(),
|
||
'deductive_types': DEDUCTIVE_TYPES,
|
||
'is_advance': is_advance,
|
||
'advance_amount': advance_adj.amount if advance_adj else None,
|
||
'advance_description': advance_adj.description if advance_adj else '',
|
||
'is_loan': is_loan,
|
||
'loan_amount': loan_adj.amount if loan_adj else None,
|
||
'loan_description': loan_adj.description if loan_adj else '',
|
||
}
|
||
|
||
# 1. Render HTML email body
|
||
html_message = render_to_string('core/email/payslip_email.html', email_context)
|
||
plain_message = strip_tags(html_message)
|
||
|
||
# 2. Render PDF attachment (returns None if xhtml2pdf is not installed)
|
||
pdf_content = render_to_pdf('core/pdf/payslip_pdf.html', email_context)
|
||
|
||
# 3. Send email with PDF attached
|
||
recipient = getattr(settings, 'SPARK_RECEIPT_EMAIL', None)
|
||
if recipient:
|
||
try:
|
||
email = EmailMultiAlternatives(
|
||
subject,
|
||
plain_message,
|
||
settings.DEFAULT_FROM_EMAIL,
|
||
[recipient],
|
||
)
|
||
email.attach_alternative(html_message, "text/html")
|
||
|
||
if pdf_content:
|
||
email.attach(
|
||
f"Payslip_{worker.id}_{payroll_record.date}.pdf",
|
||
pdf_content,
|
||
'application/pdf'
|
||
)
|
||
|
||
email.send()
|
||
if not suppress_messages:
|
||
messages.success(
|
||
request,
|
||
f'Payment of R {total_amount:,.2f} processed for {worker.name}. '
|
||
f'Payslip emailed successfully.'
|
||
)
|
||
except Exception as e:
|
||
# Payment is saved — just warn that email failed
|
||
if not suppress_messages:
|
||
messages.warning(
|
||
request,
|
||
f'Payment of R {total_amount:,.2f} processed for {worker.name}, '
|
||
f'but email delivery failed: {str(e)}'
|
||
)
|
||
raise # Re-raise so batch_pay can count failures
|
||
else:
|
||
# No SPARK_RECEIPT_EMAIL configured — just show success
|
||
if not suppress_messages:
|
||
messages.success(
|
||
request,
|
||
f'Payment of R {total_amount:,.2f} processed for {worker.name}. '
|
||
f'{log_count} work log(s) marked as paid.'
|
||
)
|
||
|
||
|
||
# =============================================================================
|
||
# === BATCH PAY PREVIEW ===
|
||
# AJAX GET endpoint — dry run showing which workers would be paid and how
|
||
# much, based on their team's pay schedule. No payments are made here.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def batch_pay_preview(request):
|
||
"""Return JSON preview of batch payment — who gets paid and how much.
|
||
Accepts ?mode=all to skip pay-period cutoff and include ALL unpaid items.
|
||
Default mode is 'schedule' which splits at last completed pay period."""
|
||
if not is_admin(request.user):
|
||
return JsonResponse({'error': 'Not authorized'}, status=403)
|
||
|
||
# === MODE: 'schedule' (default) = split at last paydate, 'all' = pay everything ===
|
||
mode = request.GET.get('mode', 'schedule')
|
||
|
||
eligible = []
|
||
skipped = []
|
||
total_amount = Decimal('0.00')
|
||
|
||
# Get all active workers with their work logs and pending adjustments
|
||
active_workers = Worker.objects.filter(active=True).prefetch_related(
|
||
Prefetch(
|
||
'work_logs',
|
||
queryset=WorkLog.objects.prefetch_related('payroll_records').select_related('project')
|
||
),
|
||
Prefetch(
|
||
'adjustments',
|
||
queryset=PayrollAdjustment.objects.filter(payroll_record__isnull=True)
|
||
),
|
||
).order_by('name')
|
||
|
||
for worker in active_workers:
|
||
team = get_worker_active_team(worker)
|
||
|
||
# --- In 'schedule' mode, skip workers without a pay schedule ---
|
||
if mode == 'schedule':
|
||
if not team or not team.pay_frequency or not team.pay_start_date:
|
||
# Check if worker has ANY unpaid items before listing as skipped
|
||
has_unpaid = False
|
||
for log in worker.work_logs.all():
|
||
paid_ids = {pr.worker_id for pr in log.payroll_records.all()}
|
||
if worker.id not in paid_ids:
|
||
has_unpaid = True
|
||
break
|
||
if not has_unpaid:
|
||
has_unpaid = worker.adjustments.filter(payroll_record__isnull=True).exists()
|
||
|
||
if has_unpaid:
|
||
skipped.append({
|
||
'worker_name': worker.name,
|
||
'reason': 'No pay schedule configured',
|
||
})
|
||
continue
|
||
|
||
# --- Determine cutoff date (if applicable) ---
|
||
cutoff_date = None
|
||
if mode == 'schedule':
|
||
# cutoff_date = end of the last COMPLETED period.
|
||
# We pay ALL overdue work (across all past periods), not just one period.
|
||
period_start, period_end = get_pay_period(team)
|
||
if not period_start:
|
||
continue
|
||
cutoff_date = period_start - datetime.timedelta(days=1)
|
||
|
||
# --- Find unpaid logs (with or without cutoff) ---
|
||
unpaid_log_ids = []
|
||
for log in worker.work_logs.all():
|
||
paid_ids = {pr.worker_id for pr in log.payroll_records.all()}
|
||
if worker.id not in paid_ids:
|
||
# In 'all' mode: no date filter. In 'schedule' mode: only up to cutoff.
|
||
if cutoff_date is None or log.date <= cutoff_date:
|
||
unpaid_log_ids.append(log.id)
|
||
|
||
# --- Find pending adjustments (with or without cutoff) ---
|
||
unpaid_adj_ids = []
|
||
adj_amount = Decimal('0.00')
|
||
for adj in worker.adjustments.all():
|
||
if cutoff_date is None or (adj.date and adj.date <= cutoff_date):
|
||
unpaid_adj_ids.append(adj.id)
|
||
if adj.type in ADDITIVE_TYPES:
|
||
adj_amount += adj.amount
|
||
elif adj.type in DEDUCTIVE_TYPES:
|
||
adj_amount -= adj.amount
|
||
|
||
# Nothing due for this worker
|
||
if not unpaid_log_ids and not unpaid_adj_ids:
|
||
continue
|
||
|
||
log_count = len(unpaid_log_ids)
|
||
logs_amount = log_count * worker.daily_rate
|
||
net = logs_amount + adj_amount
|
||
|
||
# Skip workers with zero or negative net pay
|
||
if net <= 0:
|
||
skipped.append({
|
||
'worker_name': worker.name,
|
||
'reason': f'Net pay is R {net:,.2f} (zero or negative)',
|
||
})
|
||
continue
|
||
|
||
# --- Period display text ---
|
||
if cutoff_date:
|
||
# Use day integer to avoid platform-specific strftime issues
|
||
period_display = f"Up to {cutoff_date.day} {cutoff_date.strftime('%b %Y')}"
|
||
else:
|
||
period_display = "All unpaid"
|
||
|
||
# Check if worker has any active loans or advances
|
||
has_loan = Loan.objects.filter(worker=worker, active=True).exists()
|
||
|
||
eligible.append({
|
||
'worker_id': worker.id,
|
||
'worker_name': worker.name,
|
||
'team_name': team.name if team else '—',
|
||
'period': period_display,
|
||
'days': log_count,
|
||
'logs_amount': float(logs_amount),
|
||
'adj_amount': float(adj_amount),
|
||
'net_pay': float(net),
|
||
'log_ids': unpaid_log_ids,
|
||
'adj_ids': unpaid_adj_ids,
|
||
'has_loan': has_loan,
|
||
})
|
||
total_amount += net
|
||
|
||
return JsonResponse({
|
||
'eligible': eligible,
|
||
'skipped': skipped,
|
||
'total_amount': float(total_amount),
|
||
'worker_count': len(eligible),
|
||
'mode': mode,
|
||
})
|
||
|
||
|
||
# =============================================================================
|
||
# === BATCH PAY (PROCESS) ===
|
||
# POST endpoint — processes payments for multiple workers at once.
|
||
# Each worker gets their own atomic transaction and payslip email.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def batch_pay(request):
|
||
"""Process batch payments for multiple workers using their team pay schedules."""
|
||
if request.method != 'POST':
|
||
return redirect('payroll_dashboard')
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Not authorized.")
|
||
|
||
try:
|
||
body = json.loads(request.body)
|
||
except (json.JSONDecodeError, ValueError):
|
||
messages.error(request, 'Invalid request data.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
workers_to_pay = body.get('workers', [])
|
||
if not workers_to_pay:
|
||
messages.warning(request, 'No workers selected for batch payment.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
# === PROCESS EACH WORKER ===
|
||
# Each worker gets their own atomic transaction (independent row locks).
|
||
# This means if one worker fails, others still succeed.
|
||
paid_count = 0
|
||
paid_total = Decimal('0.00')
|
||
errors = []
|
||
email_queue = [] # Collect payslip data for emails (sent after all payments)
|
||
|
||
for entry in workers_to_pay:
|
||
worker_id = entry.get('worker_id')
|
||
log_ids = entry.get('log_ids', [])
|
||
adj_ids = entry.get('adj_ids', [])
|
||
|
||
try:
|
||
worker = Worker.objects.get(id=worker_id, active=True)
|
||
except Worker.DoesNotExist:
|
||
errors.append(f'Worker ID {worker_id} not found or inactive.')
|
||
continue
|
||
|
||
result = _process_single_payment(
|
||
worker_id,
|
||
selected_log_ids=log_ids or None,
|
||
selected_adj_ids=adj_ids or None,
|
||
)
|
||
|
||
if result is None:
|
||
continue # Nothing to pay — silently skip
|
||
|
||
payroll_record, log_count, logs_amount = result
|
||
paid_count += 1
|
||
paid_total += payroll_record.amount_paid
|
||
email_queue.append((worker, payroll_record, log_count, logs_amount))
|
||
|
||
# === SEND PAYSLIP EMAILS (outside all transactions) ===
|
||
# If an email fails, the payment is still saved — same pattern as individual pay.
|
||
email_failures = 0
|
||
for worker, pr, lc, la in email_queue:
|
||
try:
|
||
_send_payslip_email(request, worker, pr, lc, la, suppress_messages=True)
|
||
except Exception:
|
||
email_failures += 1
|
||
|
||
# === SUMMARY MESSAGE ===
|
||
if paid_count > 0:
|
||
msg = f'Batch payment complete: {paid_count} worker(s) paid, total R {paid_total:,.2f}.'
|
||
if email_failures:
|
||
msg += f' ({email_failures} email(s) failed to send.)'
|
||
messages.success(request, msg)
|
||
|
||
for err in errors:
|
||
messages.warning(request, err)
|
||
|
||
if paid_count == 0 and not errors:
|
||
messages.info(request, 'No payments were processed — all workers already paid or had zero/negative net pay.')
|
||
|
||
return redirect('payroll_dashboard')
|
||
|
||
|
||
# =============================================================================
|
||
# === PRICE OVERTIME ===
|
||
# Creates Overtime adjustments for workers who have unpriced overtime on
|
||
# their work logs. Called via AJAX from the Price Overtime modal.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def price_overtime(request):
|
||
if request.method != 'POST':
|
||
return redirect('payroll_dashboard')
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Not authorized.")
|
||
|
||
log_ids = request.POST.getlist('log_id[]')
|
||
worker_ids = request.POST.getlist('worker_id[]')
|
||
rate_pcts = request.POST.getlist('rate_pct[]')
|
||
|
||
created_count = 0
|
||
for log_id, w_id, pct in zip(log_ids, worker_ids, rate_pcts):
|
||
try:
|
||
worklog = WorkLog.objects.select_related('project').get(id=int(log_id))
|
||
worker = Worker.objects.get(id=int(w_id))
|
||
rate_pct = Decimal(pct)
|
||
|
||
# Calculate: daily_rate × overtime_fraction × (rate_percentage / 100)
|
||
amount = worker.daily_rate * worklog.overtime_amount * (rate_pct / Decimal('100'))
|
||
|
||
if amount > 0:
|
||
PayrollAdjustment.objects.create(
|
||
worker=worker,
|
||
type='Overtime',
|
||
amount=amount,
|
||
date=worklog.date,
|
||
description=f'Overtime ({worklog.get_overtime_amount_display()}) at {pct}% on {worklog.project.name}',
|
||
work_log=worklog,
|
||
project=worklog.project,
|
||
)
|
||
# Mark this worker as "priced" for this log's overtime
|
||
worklog.priced_workers.add(worker)
|
||
created_count += 1
|
||
except (WorkLog.DoesNotExist, Worker.DoesNotExist, Exception):
|
||
continue
|
||
|
||
messages.success(request, f'Priced {created_count} overtime adjustment(s).')
|
||
return redirect('payroll_dashboard')
|
||
|
||
|
||
# =============================================================================
|
||
# === ADD ADJUSTMENT ===
|
||
# Creates a new payroll adjustment (bonus, deduction, loan, etc.).
|
||
# Called via POST from the Add Adjustment modal.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def add_adjustment(request):
|
||
if request.method != 'POST':
|
||
return redirect('payroll_dashboard')
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Not authorized.")
|
||
|
||
worker_ids = request.POST.getlist('workers')
|
||
adj_type = request.POST.get('type', '')
|
||
amount_str = request.POST.get('amount', '0')
|
||
description = request.POST.get('description', '')
|
||
date_str = request.POST.get('date', '')
|
||
project_id = request.POST.get('project', '')
|
||
|
||
# Validate workers — at least one must be selected.
|
||
# The frontend also checks this, but this is a safety net in case
|
||
# the user has JavaScript disabled or submits via other means.
|
||
if not worker_ids:
|
||
messages.error(request, 'Please select at least one worker.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
# Validate amount
|
||
try:
|
||
amount = Decimal(amount_str)
|
||
if amount <= 0:
|
||
raise ValueError
|
||
except (ValueError, Exception):
|
||
messages.error(request, 'Please enter a valid amount greater than zero.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
# Validate date
|
||
try:
|
||
adj_date = datetime.datetime.strptime(date_str, '%Y-%m-%d').date() if date_str else timezone.now().date()
|
||
except ValueError:
|
||
adj_date = timezone.now().date()
|
||
|
||
# Validate project for types that require it
|
||
project = None
|
||
if project_id:
|
||
try:
|
||
project = Project.objects.get(id=int(project_id))
|
||
except Project.DoesNotExist:
|
||
pass
|
||
|
||
project_required_types = ('Overtime', 'Bonus', 'Deduction', 'Advance Payment')
|
||
if adj_type in project_required_types and not project:
|
||
messages.error(request, 'A project must be selected for this adjustment type.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
created_count = 0
|
||
for w_id in worker_ids:
|
||
try:
|
||
worker = Worker.objects.get(id=int(w_id))
|
||
except Worker.DoesNotExist:
|
||
continue
|
||
|
||
loan = None
|
||
|
||
# === LOAN REPAYMENT — find the worker's active loan ===
|
||
if adj_type == 'Loan Repayment':
|
||
loan = worker.loans.filter(active=True, loan_type='loan').first()
|
||
if not loan:
|
||
messages.warning(request, f'{worker.name} has no active loan — skipped.')
|
||
continue
|
||
|
||
# === ADVANCE REPAYMENT — find the worker's active advance ===
|
||
if adj_type == 'Advance Repayment':
|
||
loan = worker.loans.filter(active=True, loan_type='advance').first()
|
||
if not loan:
|
||
messages.warning(request, f'{worker.name} has no active advance — skipped.')
|
||
continue
|
||
|
||
# === NEW LOAN — create a Loan record (loan_type='loan') ===
|
||
# If "Pay Immediately" is checked (default), the loan is processed
|
||
# right away — PayrollRecord is created, payslip emailed to Spark,
|
||
# and the adjustment is marked as paid. If unchecked, the loan sits
|
||
# in Pending Payments and is included in the next pay cycle.
|
||
if adj_type == 'New Loan':
|
||
loan = Loan.objects.create(
|
||
worker=worker,
|
||
loan_type='loan',
|
||
principal_amount=amount,
|
||
remaining_balance=amount,
|
||
date=adj_date,
|
||
reason=description,
|
||
)
|
||
|
||
pay_immediately = request.POST.get('pay_immediately') == '1'
|
||
if pay_immediately:
|
||
# Create the adjustment and immediately mark it as paid
|
||
loan_adj = PayrollAdjustment.objects.create(
|
||
worker=worker,
|
||
type='New Loan',
|
||
amount=amount,
|
||
date=adj_date,
|
||
description=description,
|
||
loan=loan,
|
||
)
|
||
payroll_record = PayrollRecord.objects.create(
|
||
worker=worker,
|
||
amount_paid=amount,
|
||
date=adj_date,
|
||
)
|
||
loan_adj.payroll_record = payroll_record
|
||
loan_adj.save()
|
||
|
||
# Send payslip email to Spark
|
||
_send_payslip_email(request, worker, payroll_record, 0, Decimal('0.00'))
|
||
created_count += 1
|
||
continue # Skip the generic PayrollAdjustment creation below
|
||
|
||
# === ADVANCE PAYMENT — immediate payment + auto-repayment ===
|
||
# An advance is a salary prepayment — worker gets money now, and
|
||
# the full amount is automatically deducted from their next salary.
|
||
# Unlike other adjustments, advances are processed IMMEDIATELY
|
||
# (they don't sit in Pending Payments waiting for a "Pay" click).
|
||
if adj_type == 'Advance Payment':
|
||
# VALIDATION: Worker must have unpaid work to justify an advance.
|
||
# If they have no logged work, this is a loan, not an advance.
|
||
has_unpaid_logs = False
|
||
for log in worker.work_logs.all():
|
||
paid_worker_ids = set(
|
||
log.payroll_records.values_list('worker_id', flat=True)
|
||
)
|
||
if worker.id not in paid_worker_ids:
|
||
has_unpaid_logs = True
|
||
break
|
||
|
||
if not has_unpaid_logs:
|
||
messages.warning(
|
||
request,
|
||
f'{worker.name} has no unpaid work days — cannot create '
|
||
f'an advance. Use "New Loan" instead.'
|
||
)
|
||
continue
|
||
|
||
# 1. Create the Loan record (tracks the advance balance)
|
||
loan = Loan.objects.create(
|
||
worker=worker,
|
||
loan_type='advance',
|
||
principal_amount=amount,
|
||
remaining_balance=amount,
|
||
date=adj_date,
|
||
reason=description or 'Salary advance',
|
||
)
|
||
|
||
# 2. Create the Advance Payment adjustment
|
||
advance_adj = PayrollAdjustment.objects.create(
|
||
worker=worker,
|
||
type='Advance Payment',
|
||
amount=amount,
|
||
date=adj_date,
|
||
description=description,
|
||
project=project,
|
||
loan=loan,
|
||
)
|
||
|
||
# 3. AUTO-PROCESS: Create PayrollRecord immediately
|
||
# (advance is paid now, not at the next payday)
|
||
payroll_record = PayrollRecord.objects.create(
|
||
worker=worker,
|
||
amount_paid=amount,
|
||
date=adj_date,
|
||
)
|
||
advance_adj.payroll_record = payroll_record
|
||
advance_adj.save()
|
||
|
||
# 4. AUTO-CREATE REPAYMENT for the next salary cycle
|
||
# This ensures the advance is automatically deducted from
|
||
# the worker's next salary without the admin having to remember.
|
||
PayrollAdjustment.objects.create(
|
||
worker=worker,
|
||
type='Advance Repayment',
|
||
amount=amount,
|
||
date=adj_date,
|
||
description=f'Auto-deduction for advance of R {amount:.2f}',
|
||
loan=loan,
|
||
project=project,
|
||
)
|
||
|
||
# 5. Send payslip email to SparkReceipt
|
||
_send_payslip_email(request, worker, payroll_record, 0, Decimal('0.00'))
|
||
created_count += 1
|
||
continue # Skip the generic PayrollAdjustment creation below
|
||
|
||
# === ALL OTHER TYPES — create a pending adjustment ===
|
||
PayrollAdjustment.objects.create(
|
||
worker=worker,
|
||
type=adj_type,
|
||
amount=amount,
|
||
date=adj_date,
|
||
description=description,
|
||
project=project,
|
||
loan=loan,
|
||
)
|
||
created_count += 1
|
||
|
||
messages.success(request, f'Created {created_count} {adj_type} adjustment(s).')
|
||
return redirect('payroll_dashboard')
|
||
|
||
|
||
# =============================================================================
|
||
# === EDIT ADJUSTMENT ===
|
||
# Updates an existing unpaid adjustment. Type changes are limited to
|
||
# Bonus ↔ Deduction swaps only.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def edit_adjustment(request, adj_id):
|
||
if request.method != 'POST':
|
||
return redirect('payroll_dashboard')
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Not authorized.")
|
||
|
||
adj = get_object_or_404(PayrollAdjustment, id=adj_id)
|
||
|
||
# Can't edit adjustments that have already been paid
|
||
if adj.payroll_record is not None:
|
||
messages.error(request, 'Cannot edit a paid adjustment.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
# Can't edit Loan Repayment adjustments (managed by the loan system).
|
||
# Advance Repayments CAN be edited — the admin may want to reduce the
|
||
# auto-deduction amount (e.g., deduct R50 of a R100 advance this payday).
|
||
if adj.type == 'Loan Repayment':
|
||
messages.warning(request, 'Loan repayment adjustments cannot be edited directly.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
# Update fields
|
||
try:
|
||
adj.amount = Decimal(request.POST.get('amount', str(adj.amount)))
|
||
except (ValueError, Exception):
|
||
pass
|
||
|
||
adj.description = request.POST.get('description', adj.description)
|
||
|
||
date_str = request.POST.get('date', '')
|
||
if date_str:
|
||
try:
|
||
adj.date = datetime.datetime.strptime(date_str, '%Y-%m-%d').date()
|
||
except ValueError:
|
||
pass
|
||
|
||
# Type change — only allow Bonus ↔ Deduction
|
||
new_type = request.POST.get('type', adj.type)
|
||
if adj.type in ('Bonus', 'Deduction') and new_type in ('Bonus', 'Deduction'):
|
||
adj.type = new_type
|
||
|
||
# Project
|
||
project_id = request.POST.get('project', '')
|
||
if project_id:
|
||
try:
|
||
adj.project = Project.objects.get(id=int(project_id))
|
||
except Project.DoesNotExist:
|
||
pass
|
||
else:
|
||
adj.project = None
|
||
|
||
# === ADVANCE REPAYMENT EDIT — cap amount at loan balance ===
|
||
# If the admin edits an auto-created advance repayment, make sure
|
||
# the amount doesn't exceed the loan's remaining balance.
|
||
if adj.type == 'Advance Repayment' and adj.loan:
|
||
if adj.amount > adj.loan.remaining_balance:
|
||
adj.amount = adj.loan.remaining_balance
|
||
messages.info(
|
||
request,
|
||
f'Amount capped at loan balance of R {adj.loan.remaining_balance:.2f}.'
|
||
)
|
||
|
||
adj.save()
|
||
|
||
# If it's a Loan or Advance adjustment, sync the loan details
|
||
if adj.type in ('New Loan', 'Advance Payment') and adj.loan:
|
||
adj.loan.principal_amount = adj.amount
|
||
adj.loan.remaining_balance = adj.amount
|
||
adj.loan.reason = adj.description
|
||
adj.loan.save()
|
||
|
||
messages.success(request, f'{adj.type} adjustment updated.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
|
||
# =============================================================================
|
||
# === DELETE ADJUSTMENT ===
|
||
# Removes an unpaid adjustment. Handles cascade logic for Loans and Overtime.
|
||
# =============================================================================
|
||
|
||
# =============================================================================
|
||
# === ADJUSTMENT CASCADE DELETE HELPER ===
|
||
# Shared by delete_adjustment (single row) and bulk_delete_adjustments (many
|
||
# rows) so both paths have identical semantics. "New Loan" and "Advance
|
||
# Payment" each own a linked Loan row that needs teardown; "Overtime" needs
|
||
# its worker un-priced from the WorkLog. Without this helper, bulk-delete
|
||
# would orphan Loan rows and leave priced_workers stale.
|
||
# =============================================================================
|
||
|
||
|
||
def _delete_adjustment_with_cascade(adj):
|
||
"""Delete one PayrollAdjustment, cascading through its linked objects.
|
||
|
||
Returns a tuple `(ok: bool, reason: str or None)`:
|
||
- `(True, None)` — row deleted successfully (with any cascades done)
|
||
- `(False, 'paid')` — adjustment already paid; refuse
|
||
- `(False, 'has_paid_repayments')` — linked Loan has paid repayments;
|
||
deleting it would lose the repayment audit trail
|
||
|
||
Cascade rules:
|
||
- New Loan / Advance Payment: delete the linked `Loan` row plus any
|
||
still-unpaid repayment adjustments. If ANY repayment has already
|
||
been paid, abort (otherwise we'd lose history of money that
|
||
already moved).
|
||
- Overtime: remove the worker from work_log.priced_workers so the
|
||
overtime can be re-priced cleanly later.
|
||
- Other types: plain delete, no cascade.
|
||
"""
|
||
if adj.payroll_record is not None:
|
||
return False, 'paid'
|
||
|
||
adj_type = adj.type
|
||
|
||
if adj_type in ('New Loan', 'Advance Payment') and adj.loan:
|
||
repayment_type = 'Advance Repayment' if adj_type == 'Advance Payment' else 'Loan Repayment'
|
||
paid_repayments = PayrollAdjustment.objects.filter(
|
||
loan=adj.loan,
|
||
type=repayment_type,
|
||
payroll_record__isnull=False,
|
||
)
|
||
if paid_repayments.exists():
|
||
return False, 'has_paid_repayments'
|
||
# Delete all still-unpaid repayments, then the Loan itself
|
||
PayrollAdjustment.objects.filter(
|
||
loan=adj.loan,
|
||
type=repayment_type,
|
||
payroll_record__isnull=True,
|
||
).delete()
|
||
adj.loan.delete()
|
||
elif adj_type == 'Overtime' and adj.work_log:
|
||
# "Un-price" the overtime — worker can be re-priced cleanly later
|
||
adj.work_log.priced_workers.remove(adj.worker)
|
||
|
||
adj.delete()
|
||
return True, None
|
||
|
||
|
||
@login_required
|
||
def delete_adjustment(request, adj_id):
|
||
if request.method != 'POST':
|
||
return redirect('payroll_dashboard')
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Not authorized.")
|
||
|
||
adj = get_object_or_404(PayrollAdjustment, id=adj_id)
|
||
adj_type = adj.type
|
||
worker_name = adj.worker.name
|
||
|
||
ok, reason = _delete_adjustment_with_cascade(adj)
|
||
if not ok:
|
||
if reason == 'paid':
|
||
messages.error(request, 'Cannot delete a paid adjustment.')
|
||
elif reason == 'has_paid_repayments':
|
||
label = 'advance' if adj_type == 'Advance Payment' else 'loan'
|
||
messages.error(
|
||
request,
|
||
f'Cannot delete {label} for {worker_name} — it has paid repayments.'
|
||
)
|
||
return redirect('payroll_dashboard')
|
||
|
||
messages.success(request, f'{adj_type} adjustment for {worker_name} deleted.')
|
||
return redirect('payroll_dashboard')
|
||
|
||
|
||
# =============================================================================
|
||
# === BULK DELETE ADJUSTMENTS (Adjustments tab) ===
|
||
# POST /payroll/adjustments/bulk-delete/ with adjustment_ids[] body.
|
||
# Only unpaid adjustments are deleted; paid rows survive because payroll
|
||
# history is immutable (matches the existing edit_adjustment view, which
|
||
# also refuses to edit paid rows).
|
||
# =============================================================================
|
||
|
||
|
||
@login_required
|
||
@require_POST
|
||
def bulk_delete_adjustments(request):
|
||
"""Delete multiple unpaid PayrollAdjustment rows with full cascade.
|
||
|
||
Body (form-encoded): `adjustment_ids` — repeated once per ID.
|
||
Returns JSON: `{"deleted": N, "requested": M, "skipped_reasons": {...}}`.
|
||
Admin-only; supervisors get 403. POST-only; anything else gets 405
|
||
from @require_POST.
|
||
|
||
Cascade: each row is deleted via `_delete_adjustment_with_cascade`
|
||
(shared with the single-row `delete_adjustment` view) so bulk and
|
||
single-row have identical semantics. Rows that fail (already paid,
|
||
or a linked loan with paid repayments) are counted in `skipped_reasons`
|
||
but don't block the rest of the batch.
|
||
"""
|
||
if not is_admin(request.user):
|
||
return JsonResponse({'error': 'Admin access required'}, status=403)
|
||
|
||
# Int-coerce and drop non-digit values (defensive against garbled input —
|
||
# ids are client-generated so any non-digit would crash the queryset).
|
||
raw_ids = request.POST.getlist('adjustment_ids')
|
||
ids = [int(v) for v in raw_ids if v.strip().isdigit()]
|
||
|
||
# Fetch each adjustment individually — we need the cascade helper to
|
||
# operate per-row (it deletes the linked Loan / unprices Overtime).
|
||
# Pre-filtering for .payroll_record__isnull=True is fine as an upfront
|
||
# short-circuit but the helper double-checks anyway.
|
||
adjustments = list(PayrollAdjustment.objects.filter(
|
||
id__in=ids,
|
||
payroll_record__isnull=True,
|
||
).select_related('loan', 'work_log', 'worker'))
|
||
|
||
deleted = 0
|
||
skipped_reasons = {}
|
||
for adj in adjustments:
|
||
ok, reason = _delete_adjustment_with_cascade(adj)
|
||
if ok:
|
||
deleted += 1
|
||
else:
|
||
skipped_reasons[reason] = skipped_reasons.get(reason, 0) + 1
|
||
|
||
return JsonResponse({
|
||
'deleted': deleted,
|
||
'requested': len(ids),
|
||
'skipped_reasons': skipped_reasons,
|
||
})
|
||
|
||
|
||
# =============================================================================
|
||
# === PREVIEW PAYSLIP (AJAX) ===
|
||
# Returns a JSON preview of what a worker's payslip would look like.
|
||
# Called from the Preview Payslip modal without saving anything.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def preview_payslip(request, worker_id):
|
||
if not is_admin(request.user):
|
||
return JsonResponse({'error': 'Not authorized'}, status=403)
|
||
|
||
worker = get_object_or_404(Worker, id=worker_id)
|
||
|
||
# Find unpaid logs — include the log ID so the frontend can send
|
||
# selected IDs back for split payslip (selective payment).
|
||
unpaid_logs = []
|
||
for log in worker.work_logs.select_related('project').prefetch_related('payroll_records').all():
|
||
paid_worker_ids = {pr.worker_id for pr in log.payroll_records.all()}
|
||
if worker.id not in paid_worker_ids:
|
||
unpaid_logs.append({
|
||
'id': log.id,
|
||
'date': log.date.strftime('%Y-%m-%d'),
|
||
'project': log.project.name,
|
||
})
|
||
|
||
# Sort logs by date so the split makes visual sense (oldest first)
|
||
unpaid_logs.sort(key=lambda x: x['date'])
|
||
|
||
log_count = len(unpaid_logs)
|
||
log_amount = float(log_count * worker.daily_rate)
|
||
|
||
# Find pending adjustments — include ID and date for split payslip
|
||
pending_adjs = worker.adjustments.filter(
|
||
payroll_record__isnull=True
|
||
).select_related('project')
|
||
|
||
adjustments_list = []
|
||
adj_total = 0.0
|
||
for adj in pending_adjs:
|
||
sign = '+' if adj.type in ADDITIVE_TYPES else '-'
|
||
adj_total += float(adj.amount) if adj.type in ADDITIVE_TYPES else -float(adj.amount)
|
||
adjustments_list.append({
|
||
'id': adj.id,
|
||
# 'type' keeps the raw DB value so any JS that uses it as an
|
||
# identifier keeps working; 'type_label' is the short display
|
||
# label ('Loan' / 'Advance' / 'Advance Repaid' etc.) for visible UI.
|
||
'type': adj.type,
|
||
'type_label': adj.get_type_display(),
|
||
'amount': float(adj.amount),
|
||
'sign': sign,
|
||
'description': adj.description,
|
||
'project': adj.project.name if adj.project else '',
|
||
'date': adj.date.strftime('%Y-%m-%d'),
|
||
})
|
||
|
||
# === ACTIVE LOANS & ADVANCES ===
|
||
# Include the worker's outstanding balances so the admin can see the
|
||
# full picture and add repayments directly from the preview modal.
|
||
active_loans = worker.loans.filter(active=True).order_by('-date')
|
||
loans_list = []
|
||
for loan in active_loans:
|
||
loans_list.append({
|
||
'id': loan.id,
|
||
'type': loan.loan_type, # 'loan' or 'advance'
|
||
'type_label': loan.get_loan_type_display(), # 'Loan' or 'Advance'
|
||
'principal': float(loan.principal_amount),
|
||
'balance': float(loan.remaining_balance),
|
||
'date': loan.date.strftime('%Y-%m-%d'),
|
||
'reason': loan.reason or '',
|
||
})
|
||
|
||
# === PAY PERIOD INFO ===
|
||
# If the worker belongs to a team with a pay schedule, include the
|
||
# current period boundaries so the "Split at Pay Date" button can work.
|
||
team = get_worker_active_team(worker)
|
||
period_start, period_end = get_pay_period(team)
|
||
|
||
# cutoff_date = last day of the most recently COMPLETED pay period.
|
||
# All unpaid logs on or before this date are "due" for payment.
|
||
# E.g., fortnightly periods ending Mar 14, Mar 28, Apr 11...
|
||
# If today is Mar 20, cutoff_date = Mar 14 (pay everything through last completed period).
|
||
cutoff_date = (period_start - datetime.timedelta(days=1)) if period_start else None
|
||
|
||
pay_period = {
|
||
'has_schedule': period_start is not None,
|
||
'start': period_start.strftime('%Y-%m-%d') if period_start else None,
|
||
'end': period_end.strftime('%Y-%m-%d') if period_end else None,
|
||
'cutoff_date': cutoff_date.strftime('%Y-%m-%d') if cutoff_date else None,
|
||
'frequency': team.pay_frequency if team else None,
|
||
'team_name': team.name if team else None,
|
||
}
|
||
|
||
return JsonResponse({
|
||
'worker_id': worker.id,
|
||
'worker_name': worker.name,
|
||
'worker_id_number': worker.id_number,
|
||
'day_rate': float(worker.daily_rate),
|
||
'days_worked': log_count,
|
||
'log_amount': log_amount,
|
||
'adjustments': adjustments_list,
|
||
'adj_total': adj_total,
|
||
'net_pay': log_amount + adj_total,
|
||
'logs': unpaid_logs,
|
||
'active_loans': loans_list,
|
||
'pay_period': pay_period,
|
||
})
|
||
|
||
|
||
# =============================================================================
|
||
# === WORKER LOOKUP (AJAX) ===
|
||
# Returns a comprehensive financial report card for a single worker.
|
||
# Called via AJAX GET from the Worker Lookup modal on the payroll dashboard.
|
||
# Shows: amount payable, outstanding loans, recent payments, active loans,
|
||
# current project, PPE sizing, drivers license, and notes.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def worker_lookup_ajax(request, worker_id):
|
||
"""AJAX endpoint — returns a comprehensive financial report card for a worker."""
|
||
if not is_admin(request.user):
|
||
return JsonResponse({'error': 'Not authorized'}, status=403)
|
||
|
||
worker = get_object_or_404(Worker, id=worker_id)
|
||
today = timezone.now().date()
|
||
|
||
# === AMOUNT PAYABLE ===
|
||
# Same logic as preview_payslip: find unpaid work logs for this worker
|
||
# A log is "unpaid" if no PayrollRecord links both this log and this worker
|
||
unpaid_log_count = 0
|
||
for log in worker.work_logs.prefetch_related('payroll_records').all():
|
||
paid_worker_ids = {pr.worker_id for pr in log.payroll_records.all()}
|
||
if worker.id not in paid_worker_ids:
|
||
unpaid_log_count += 1
|
||
|
||
log_amount = float(unpaid_log_count * worker.daily_rate)
|
||
|
||
# Net adjustment total: additive types increase pay, deductive types decrease it
|
||
pending_adjs = worker.adjustments.filter(payroll_record__isnull=True)
|
||
adj_total = 0.0
|
||
for adj in pending_adjs:
|
||
if adj.type in ADDITIVE_TYPES:
|
||
adj_total += float(adj.amount)
|
||
elif adj.type in DEDUCTIVE_TYPES:
|
||
adj_total -= float(adj.amount)
|
||
|
||
amount_payable = log_amount + adj_total
|
||
|
||
# === OUTSTANDING LOANS ===
|
||
# Total remaining balance across all active loans and advances
|
||
active_loans = worker.loans.filter(active=True).order_by('-date')
|
||
outstanding_loans = float(
|
||
active_loans.aggregate(total=Sum('remaining_balance'))['total'] or 0
|
||
)
|
||
|
||
# === PAID THIS MONTH ===
|
||
# Sum of all PayrollRecord amounts in the current calendar month
|
||
paid_this_month = float(PayrollRecord.objects.filter(
|
||
worker=worker, date__year=today.year, date__month=today.month
|
||
).aggregate(total=Sum('amount_paid'))['total'] or 0)
|
||
|
||
# === LOANS THIS YEAR ===
|
||
# Total principal of all loans issued to this worker in the current year
|
||
loans_this_year = float(Loan.objects.filter(
|
||
worker=worker, date__year=today.year
|
||
).aggregate(total=Sum('principal_amount'))['total'] or 0)
|
||
|
||
# === PAID THIS YEAR ===
|
||
# Sum of all PayrollRecord amounts in the current year
|
||
paid_this_year = float(PayrollRecord.objects.filter(
|
||
worker=worker, date__year=today.year
|
||
).aggregate(total=Sum('amount_paid'))['total'] or 0)
|
||
|
||
# === RECENT ACTIVITY ===
|
||
# Most recent of each type — used to show "last payslip", "last loan", etc.
|
||
last_payslip = PayrollRecord.objects.filter(
|
||
worker=worker).order_by('-date').first()
|
||
|
||
last_loan = Loan.objects.filter(
|
||
worker=worker).order_by('-date').first()
|
||
|
||
last_repayment = PayrollAdjustment.objects.filter(
|
||
worker=worker, type='Loan Repayment',
|
||
payroll_record__isnull=False).order_by('-date').first()
|
||
|
||
last_advance = PayrollAdjustment.objects.filter(
|
||
worker=worker, type='Advance Payment',
|
||
payroll_record__isnull=False).order_by('-date').first()
|
||
|
||
# === CURRENT PROJECT ===
|
||
# The project from the worker's most recent work log, plus how many
|
||
# days they've worked on that project in total
|
||
latest_log = worker.work_logs.select_related('project').order_by('-date').first()
|
||
current_project = None
|
||
days_on_project = 0
|
||
if latest_log and latest_log.project:
|
||
current_project = latest_log.project.name
|
||
days_on_project = worker.work_logs.filter(project=latest_log.project).count()
|
||
|
||
# === TEAM ===
|
||
team = get_worker_active_team(worker)
|
||
|
||
# === ACTIVE LOANS LIST ===
|
||
# Full details for the loans table in the modal
|
||
loans_list = []
|
||
for loan in active_loans:
|
||
loans_list.append({
|
||
'type': loan.get_loan_type_display(),
|
||
'principal': float(loan.principal_amount),
|
||
'balance': float(loan.remaining_balance),
|
||
'date': loan.date.strftime('%Y-%m-%d'),
|
||
'reason': loan.reason or '',
|
||
})
|
||
|
||
return JsonResponse({
|
||
# Identity
|
||
'worker_id': worker.id,
|
||
'name': worker.name,
|
||
'id_number': worker.id_number,
|
||
'phone': worker.phone_number,
|
||
'employment_date': worker.employment_date.strftime('%Y-%m-%d') if worker.employment_date else '',
|
||
'team': team.name if team else '',
|
||
'current_project': current_project or '',
|
||
'days_on_project': days_on_project,
|
||
|
||
# Quick stats (4 cards)
|
||
'amount_payable': amount_payable,
|
||
'outstanding_loans': outstanding_loans,
|
||
'paid_this_month': paid_this_month,
|
||
'loans_this_year': loans_this_year,
|
||
'paid_this_year': paid_this_year,
|
||
|
||
# Recent activity
|
||
'last_payslip': {
|
||
'date': last_payslip.date.strftime('%Y-%m-%d'),
|
||
'amount': float(last_payslip.amount_paid),
|
||
} if last_payslip else None,
|
||
'last_loan': {
|
||
'date': last_loan.date.strftime('%Y-%m-%d'),
|
||
'amount': float(last_loan.principal_amount),
|
||
'reason': last_loan.reason or '',
|
||
} if last_loan else None,
|
||
'last_repayment': {
|
||
'date': last_repayment.date.strftime('%Y-%m-%d'),
|
||
'amount': float(last_repayment.amount),
|
||
} if last_repayment else None,
|
||
'last_advance': {
|
||
'date': last_advance.date.strftime('%Y-%m-%d'),
|
||
'amount': float(last_advance.amount),
|
||
} if last_advance else None,
|
||
|
||
# Active loans table
|
||
'active_loans': loans_list,
|
||
|
||
# Sizing & info
|
||
'shoe_size': worker.shoe_size,
|
||
'overall_top_size': worker.overall_top_size,
|
||
'pants_size': worker.pants_size,
|
||
'tshirt_size': worker.tshirt_size,
|
||
'has_drivers_license': worker.has_drivers_license,
|
||
'notes': worker.notes,
|
||
})
|
||
|
||
|
||
# =============================================================================
|
||
# === ADD REPAYMENT (AJAX) ===
|
||
# Creates a Loan Repayment or Advance Repayment adjustment for a single worker.
|
||
# Called via AJAX POST from the Payslip Preview modal's inline repayment form.
|
||
# Returns JSON so the modal can refresh in-place without a page reload.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def add_repayment_ajax(request, worker_id):
|
||
"""AJAX endpoint: add a repayment adjustment and return JSON response."""
|
||
if request.method != 'POST':
|
||
return JsonResponse({'error': 'POST required'}, status=405)
|
||
if not is_admin(request.user):
|
||
return JsonResponse({'error': 'Not authorized'}, status=403)
|
||
|
||
worker = get_object_or_404(Worker, id=worker_id)
|
||
|
||
# Parse the POST body (sent as JSON from fetch())
|
||
try:
|
||
body = json.loads(request.body)
|
||
except json.JSONDecodeError:
|
||
return JsonResponse({'error': 'Invalid JSON'}, status=400)
|
||
|
||
loan_id = body.get('loan_id')
|
||
amount_str = body.get('amount', '0')
|
||
description = body.get('description', '')
|
||
|
||
# Validate: loan exists, belongs to this worker, and is active
|
||
try:
|
||
loan = Loan.objects.get(id=int(loan_id), worker=worker, active=True)
|
||
except (Loan.DoesNotExist, ValueError, TypeError):
|
||
return JsonResponse({'error': 'No active loan/advance found.'}, status=400)
|
||
|
||
# Validate: amount is positive
|
||
try:
|
||
amount = Decimal(str(amount_str))
|
||
if amount <= 0:
|
||
raise ValueError
|
||
except (ValueError, Exception):
|
||
return JsonResponse({'error': 'Please enter a valid amount greater than zero.'}, status=400)
|
||
|
||
# Cap the repayment at the remaining balance (prevent over-repaying)
|
||
if amount > loan.remaining_balance:
|
||
amount = loan.remaining_balance
|
||
|
||
# Pick the right repayment type based on loan type
|
||
repayment_type = 'Advance Repayment' if loan.loan_type == 'advance' else 'Loan Repayment'
|
||
|
||
# Create the adjustment (balance deduction happens later during process_payment)
|
||
PayrollAdjustment.objects.create(
|
||
worker=worker,
|
||
type=repayment_type,
|
||
amount=amount,
|
||
date=timezone.now().date(),
|
||
description=description or f'{loan.get_loan_type_display()} repayment',
|
||
loan=loan,
|
||
)
|
||
|
||
return JsonResponse({
|
||
'success': True,
|
||
'message': f'{repayment_type} of R {amount:.2f} added for {worker.name}.',
|
||
})
|
||
|
||
|
||
# =============================================================================
|
||
# === PAYSLIP DETAIL ===
|
||
# Shows a completed payment (PayrollRecord) as a printable payslip page.
|
||
# Displays: worker details, work log table, adjustments table, totals.
|
||
# Reached from the "Payment History" tab on the payroll dashboard.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def payslip_detail(request, pk):
|
||
"""Show a completed payslip with work logs, adjustments, and totals."""
|
||
if not is_admin(request.user):
|
||
return redirect('payroll_dashboard')
|
||
|
||
record = get_object_or_404(PayrollRecord, pk=pk)
|
||
|
||
# Get the work logs included in this payment
|
||
logs = record.work_logs.select_related('project').order_by('date')
|
||
|
||
# Get the adjustments linked to this payment
|
||
adjustments = record.adjustments.all().order_by('type')
|
||
|
||
# Calculate base pay from logs
|
||
# Each log = 1 day of work at the worker's daily rate
|
||
base_pay = record.worker.daily_rate * logs.count()
|
||
|
||
# Calculate net adjustment amount (additive minus deductive)
|
||
adjustments_net = record.amount_paid - base_pay
|
||
|
||
# === DETECT STANDALONE PAYMENT (no work logs, single adjustment) ===
|
||
# Advance-only or Loan-only payments use a cleaner layout.
|
||
adjs_list = list(adjustments)
|
||
advance_adj = None
|
||
loan_adj = None
|
||
if logs.count() == 0 and len(adjs_list) == 1:
|
||
if adjs_list[0].type == 'Advance Payment':
|
||
advance_adj = adjs_list[0]
|
||
elif adjs_list[0].type == 'New Loan':
|
||
loan_adj = adjs_list[0]
|
||
|
||
context = {
|
||
'record': record,
|
||
'logs': logs,
|
||
'adjustments': adjustments,
|
||
'base_pay': base_pay,
|
||
'adjustments_net': adjustments_net,
|
||
'adjustments_net_abs': abs(adjustments_net),
|
||
'deductive_types': DEDUCTIVE_TYPES,
|
||
'is_advance': advance_adj is not None,
|
||
'advance_adj': advance_adj,
|
||
'is_loan': loan_adj is not None,
|
||
'loan_adj': loan_adj,
|
||
}
|
||
return render(request, 'core/payslip.html', context)
|
||
|
||
|
||
# =============================================================================
|
||
# === CREATE EXPENSE RECEIPT ===
|
||
# Single-page form for recording business expenses.
|
||
# Supports dynamic line items (products + amounts) and VAT calculation.
|
||
# On save: emails an HTML + PDF receipt to Spark Receipt for accounting.
|
||
# =============================================================================
|
||
|
||
@login_required
|
||
def create_receipt(request):
|
||
"""Create a new expense receipt and email it to Spark Receipt."""
|
||
if not is_staff_or_supervisor(request.user):
|
||
return redirect('home')
|
||
|
||
if request.method == 'POST':
|
||
form = ExpenseReceiptForm(request.POST)
|
||
items = ExpenseLineItemFormSet(request.POST)
|
||
|
||
if form.is_valid() and items.is_valid():
|
||
# Save the receipt header (but don't commit yet — need to set user)
|
||
receipt = form.save(commit=False)
|
||
receipt.user = request.user
|
||
# Set temporary zero values so the first save doesn't fail.
|
||
# (subtotal and total_amount have no default in the model,
|
||
# so they'd be NULL — which MariaDB rejects.)
|
||
# We'll recalculate these properly after saving line items.
|
||
receipt.subtotal = Decimal('0.00')
|
||
receipt.vat_amount = Decimal('0.00')
|
||
receipt.total_amount = Decimal('0.00')
|
||
receipt.save()
|
||
|
||
# Save line items — link them to this receipt
|
||
items.instance = receipt
|
||
line_items = items.save()
|
||
|
||
# === BACKEND VAT CALCULATION ===
|
||
# The frontend shows live totals, but we recalculate on the server
|
||
# using Python Decimal for accuracy (no floating-point rounding errors).
|
||
sum_amount = sum(item.amount for item in line_items)
|
||
vat_type = receipt.vat_type
|
||
|
||
if vat_type == 'Included':
|
||
# "VAT Included" means the entered amounts already include 15% VAT.
|
||
# To find the pre-VAT subtotal: divide by 1.15
|
||
# Example: R100 entered → Subtotal R86.96, VAT R13.04, Total R100
|
||
receipt.total_amount = sum_amount
|
||
receipt.subtotal = (sum_amount / Decimal('1.15')).quantize(Decimal('0.01'))
|
||
receipt.vat_amount = receipt.total_amount - receipt.subtotal
|
||
elif vat_type == 'Excluded':
|
||
# "VAT Excluded" means the entered amounts are pre-VAT.
|
||
# Add 15% on top for the total.
|
||
# Example: R100 entered → Subtotal R100, VAT R15, Total R115
|
||
receipt.subtotal = sum_amount
|
||
receipt.vat_amount = (sum_amount * Decimal('0.15')).quantize(Decimal('0.01'))
|
||
receipt.total_amount = receipt.subtotal + receipt.vat_amount
|
||
else:
|
||
# "None" — no VAT applies
|
||
receipt.subtotal = sum_amount
|
||
receipt.vat_amount = Decimal('0.00')
|
||
receipt.total_amount = sum_amount
|
||
|
||
receipt.save()
|
||
|
||
# =================================================================
|
||
# EMAIL RECEIPT (same pattern as payslip email)
|
||
# If email fails, the receipt is still saved.
|
||
# =================================================================
|
||
|
||
# Lazy import — avoids crashing the app if xhtml2pdf isn't installed
|
||
from .utils import render_to_pdf
|
||
|
||
subject = f"Receipt from {receipt.vendor_name} - {receipt.date}"
|
||
email_context = {
|
||
'receipt': receipt,
|
||
'items': line_items,
|
||
}
|
||
|
||
# 1. Render HTML email body
|
||
html_message = render_to_string(
|
||
'core/email/receipt_email.html', email_context
|
||
)
|
||
plain_message = strip_tags(html_message)
|
||
|
||
# 2. Render PDF attachment (returns None if xhtml2pdf is not installed)
|
||
pdf_content = render_to_pdf(
|
||
'core/pdf/receipt_pdf.html', email_context
|
||
)
|
||
|
||
# 3. Send email with PDF attached
|
||
recipient = getattr(settings, 'SPARK_RECEIPT_EMAIL', None)
|
||
if recipient:
|
||
try:
|
||
email = EmailMultiAlternatives(
|
||
subject,
|
||
plain_message,
|
||
settings.DEFAULT_FROM_EMAIL,
|
||
[recipient],
|
||
)
|
||
email.attach_alternative(html_message, "text/html")
|
||
|
||
if pdf_content:
|
||
email.attach(
|
||
f"Receipt_{receipt.id}.pdf",
|
||
pdf_content,
|
||
'application/pdf'
|
||
)
|
||
|
||
email.send()
|
||
messages.success(
|
||
request,
|
||
'Receipt created and sent to SparkReceipt.'
|
||
)
|
||
except Exception as e:
|
||
messages.warning(
|
||
request,
|
||
f'Receipt saved, but email failed: {str(e)}'
|
||
)
|
||
else:
|
||
messages.success(request, 'Receipt saved successfully.')
|
||
|
||
# Redirect back to a blank form for the next receipt
|
||
return redirect('create_receipt')
|
||
|
||
else:
|
||
# GET request — show a blank form with today's date
|
||
form = ExpenseReceiptForm(initial={'date': timezone.now().date()})
|
||
items = ExpenseLineItemFormSet()
|
||
|
||
return render(request, 'core/create_receipt.html', {
|
||
'form': form,
|
||
'items': items,
|
||
})
|
||
|
||
|
||
# =============================================================================
|
||
# === IMPORT DATA (TEMPORARY) ===
|
||
# Runs the import_production_data command from the browser.
|
||
# Visit /import-data/ once to populate the database. Safe to re-run.
|
||
# REMOVE THIS VIEW once data is imported.
|
||
# =============================================================================
|
||
|
||
def import_data(request):
|
||
"""Runs the import_production_data management command from the browser."""
|
||
from django.core.management import call_command
|
||
from io import StringIO
|
||
|
||
output = StringIO()
|
||
try:
|
||
call_command('import_production_data', stdout=output)
|
||
result = output.getvalue()
|
||
lines = result.replace('\n', '<br>')
|
||
return HttpResponse(
|
||
'<html><body style="font-family: monospace; padding: 20px;">'
|
||
'<h2>Import Complete!</h2>'
|
||
'<div>' + lines + '</div>'
|
||
'<br><br>'
|
||
'<a href="/admin/">Go to Admin Panel</a> | '
|
||
'<a href="/payroll/">Go to Payroll Dashboard</a> | '
|
||
'<a href="/">Go to Dashboard</a>'
|
||
'</body></html>'
|
||
)
|
||
except Exception as e:
|
||
return HttpResponse(
|
||
'<html><body style="font-family: monospace; padding: 20px; color: red;">'
|
||
'<h2>Import Error</h2>'
|
||
'<pre>' + str(e) + '</pre>'
|
||
'</body></html>',
|
||
status=500,
|
||
)
|
||
|
||
|
||
# =============================================================================
|
||
# === RUN MIGRATIONS ===
|
||
# Runs pending database migrations from the browser. Useful when Flatlogic's
|
||
# "Pull Latest" doesn't automatically run migrations after a code update.
|
||
# Visit /run-migrate/ to apply any pending migrations to the production DB.
|
||
# =============================================================================
|
||
|
||
def run_migrate(request):
|
||
"""Runs Django migrate from the browser to apply pending migrations."""
|
||
from django.core.management import call_command
|
||
from io import StringIO
|
||
|
||
output = StringIO()
|
||
try:
|
||
call_command('migrate', stdout=output)
|
||
result = output.getvalue()
|
||
lines = result.replace('\n', '<br>')
|
||
return HttpResponse(
|
||
'<html><body style="font-family: monospace; padding: 20px;">'
|
||
'<h2>Migrations Complete!</h2>'
|
||
'<div>' + lines + '</div>'
|
||
'<br><br>'
|
||
'<a href="/">Go to Dashboard</a> | '
|
||
'<a href="/payroll/">Go to Payroll Dashboard</a>'
|
||
'</body></html>'
|
||
)
|
||
except Exception as e:
|
||
return HttpResponse(
|
||
'<html><body style="font-family: monospace; padding: 20px; color: red;">'
|
||
'<h2>Migration Error</h2>'
|
||
'<pre>' + str(e) + '</pre>'
|
||
'</body></html>',
|
||
status=500,
|
||
)
|
||
|
||
|
||
# === BACKUP / RESTORE (browser-accessible, admin-only) ===
|
||
# Flatlogic has no shell/SSH — admins need to backup and restore via browser.
|
||
# These views wrap the `backup_data` and `restore_data` management commands
|
||
# and render a minimal HTML UI. Safe to leave in place in production.
|
||
|
||
@login_required
|
||
def backup_data(request):
|
||
"""Download the complete app data as a timestamped JSON file.
|
||
|
||
Admin-only. Serves the backup as a browser download so it lands
|
||
safely on the admin's laptop rather than the server filesystem
|
||
(which is ephemeral on Flatlogic).
|
||
"""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
from core.management.commands.backup_data import build_backup_payload
|
||
|
||
json_str, summary = build_backup_payload()
|
||
filename = f'foxlog_backup_{datetime.datetime.now().strftime("%Y%m%d_%H%M%S")}.json'
|
||
|
||
response = HttpResponse(json_str, content_type='application/json')
|
||
response['Content-Disposition'] = f'attachment; filename="{filename}"'
|
||
return response
|
||
|
||
|
||
@login_required
|
||
def restore_data(request):
|
||
"""Upload a .json backup to restore it into the current database.
|
||
|
||
GET → renders a minimal upload form + warning
|
||
POST → accepts the file, validates, and loads it (inside a transaction)
|
||
|
||
Admin-only. Requires explicit `confirm=yes` POST field to proceed,
|
||
so a stray click can't wipe production.
|
||
"""
|
||
if not is_admin(request.user):
|
||
return HttpResponseForbidden("Admin access required.")
|
||
|
||
from core.management.commands.restore_data import (
|
||
check_database_is_populated, restore_from_json_string,
|
||
)
|
||
|
||
db_has_data = check_database_is_populated()
|
||
|
||
if request.method == 'POST':
|
||
if request.POST.get('confirm') != 'yes':
|
||
return HttpResponse(
|
||
'<html><body style="font-family: monospace; padding: 20px; color: red;">'
|
||
'<h2>Restore cancelled</h2>'
|
||
'<p>You must tick the "Yes, I understand" checkbox to proceed.</p>'
|
||
'<a href="/restore-data/">Back</a></body></html>',
|
||
status=400,
|
||
)
|
||
|
||
uploaded = request.FILES.get('backup_file')
|
||
if not uploaded:
|
||
return HttpResponse(
|
||
'<html><body style="font-family: monospace; padding: 20px; color: red;">'
|
||
'<h2>No file uploaded</h2>'
|
||
'<a href="/restore-data/">Back</a></body></html>',
|
||
status=400,
|
||
)
|
||
|
||
json_str = uploaded.read().decode('utf-8', errors='replace')
|
||
ok, result = restore_from_json_string(json_str)
|
||
|
||
if not ok:
|
||
return HttpResponse(
|
||
'<html><body style="font-family: monospace; padding: 20px; color: red;">'
|
||
'<h2>Restore failed</h2>'
|
||
f'<pre>{result}</pre>'
|
||
'<a href="/restore-data/">Back</a></body></html>',
|
||
status=500,
|
||
)
|
||
|
||
rows_html = '<br>'.join(f'{k}: {v}' for k, v in result.items())
|
||
return HttpResponse(
|
||
'<html><body style="font-family: monospace; padding: 20px;">'
|
||
'<h2 style="color: #10b981;">Restore complete!</h2>'
|
||
f'<div>{rows_html}</div><br><br>'
|
||
'<a href="/">Go to Dashboard</a></body></html>'
|
||
)
|
||
|
||
# GET — render the upload form
|
||
warning_html = ''
|
||
if db_has_data:
|
||
warning_html = (
|
||
'<p style="color: #e8851a; border-left: 3px solid #e8851a; padding-left: 10px;">'
|
||
'<strong>⚠️ Warning:</strong> this database already contains data '
|
||
'(workers / work logs / payroll records). Restoring will UPDATE existing rows '
|
||
'by primary key and INSERT missing ones. This will NOT delete data that exists '
|
||
'in the DB but not in the backup. If you want a clean restore, run '
|
||
'<code>python manage.py flush</code> first (irreversible).'
|
||
'</p>'
|
||
)
|
||
|
||
return HttpResponse(
|
||
'<html><body style="font-family: monospace; padding: 20px; max-width: 700px;">'
|
||
'<h2>Restore from backup</h2>'
|
||
+ warning_html +
|
||
'<form method="post" enctype="multipart/form-data">'
|
||
f'<input type="hidden" name="csrfmiddlewaretoken" value="{get_token(request)}">'
|
||
'<p><label>Backup JSON file:<br>'
|
||
'<input type="file" name="backup_file" accept="application/json" required></label></p>'
|
||
'<p><label><input type="checkbox" name="confirm" value="yes" required> '
|
||
'Yes, I understand this will overwrite matching rows in the database.</label></p>'
|
||
'<p><button type="submit" style="padding: 10px 20px; background: #e8851a; '
|
||
'color: white; border: none; border-radius: 4px; cursor: pointer;">'
|
||
'Restore</button>'
|
||
' <a href="/" style="margin-left: 10px;">Cancel</a></p>'
|
||
'</form></body></html>'
|
||
)
|