OnSched API & dashboard 3.5.0 — unavailability rebuild, allocation legacy mapping, V1 → V3 migration coverage
Unavailability calendar rebuild & recurring-block timezones
Public docs: Unavailability blocks, Recurring blocks.
Unavailability calendar (GET /v3/unavailability)
- Responses are one row per concrete interval, tagged with `source` (`weekly`, `recurring`, `appointment`, `holiday`, `block`) and the affected `entity_type`/`entity_id`. Rows are not merged.
- The `roundRobin` query parameter is removed (it was previously unused for the distinct/merged modes). `ServiceId` is optional; scoping still requires at least one of `LocationIds`, `ResourceIds`, or `ServiceId`.
- `GET /v3/unavailability/calendar` is deprecated and returns the same payload as `GET /v3/unavailability`; migrate callers to the canonical path.
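As a sketch of consuming the per-interval rows: since rows are never merged, callers should group by `source` rather than expect a single timeline. The sample rows below are illustrative stand-ins, not real API output.

```python
from collections import defaultdict

# Illustrative rows in the shape returned by GET /v3/unavailability:
# one row per concrete interval, tagged with source and entity fields.
rows = [
    {"source": "weekly", "entity_type": "resource", "entity_id": "res-1",
     "start_time": "2024-03-04T09:00:00Z", "end_time": "2024-03-04T12:00:00Z"},
    {"source": "block", "entity_type": "location", "entity_id": "loc-1",
     "start_time": "2024-03-04T13:00:00Z", "end_time": "2024-03-04T14:00:00Z"},
]

# Group by source instead of assuming one merged start_time/end_time stream.
by_source = defaultdict(list)
for row in rows:
    by_source[row["source"]].append((row["start_time"], row["end_time"]))

print(sorted(by_source))  # ['block', 'weekly']
```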
Stored blocks
`DELETE /v3/unavailability/blocks/:id` removes an out-of-office (OOF) block by ID (company ownership is enforced). Use native flows for appointment-linked rows.
Recurring blocks
- Rules persist an IANA timezone (`iana`) together with wall-clock `startTime`/`endTime`; expansions respect DST and fractional UTC offsets. Availability and calendar queries share this interpretation.
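The wall-clock interpretation can be illustrated with Python's `zoneinfo`: a rule's local start time stays fixed across a DST transition, so the UTC offset of the expanded intervals shifts instead. (The rule shape here is a simplified stand-in, not the API schema.)

```python
from datetime import date, datetime, time
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")   # the rule's IANA timezone
start = time(9, 0)                  # wall-clock startTime from the rule

# Same 09:00 wall-clock time before and after the 2024-03-10 DST change.
before = datetime.combine(date(2024, 3, 8), start, tzinfo=tz)
after = datetime.combine(date(2024, 3, 11), start, tzinfo=tz)

print(before.utcoffset())  # -1 day, 19:00:00  (UTC-5, EST)
print(after.utcoffset())   # -1 day, 20:00:00  (UTC-4, EDT)
```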
Integrators
- Update any UI that assumed only merged `start_time`/`end_time` rows, or that passed `roundRobin`.
- Expect snake_case fields on calendar rows (`start_time`, `entity_type`, …).
Dashboard — recurring unavailability on Availability tabs
See also Recurring blocks.
Merchant dashboard
On the Availability tab for locations, services, and resources, you can now manage recurring unavailable periods in addition to one-off blocks:
- List recurring rules with schedule summary, wall-clock time window, timezone, and active date range.
- Add or edit a rule (name, frequency, interval, start/end dates, times, weekday selection for weekly/biweekly, day-of-month for monthly; yearly repeats on the month and day of the start date).
- Delete a rule with confirmation.
Behavior matches the existing /v3/unavailability/recurringBlock APIs (structured recurrence, not RFC 5545 RRULE strings). One-off blocks are still saved with the main Save action; recurring rules save immediately from the recurring-block dialog.
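To make the "structured recurrence, not RRULE strings" distinction concrete, here is a hedged sketch contrasting the two styles. The field names in the structured rule are illustrative assumptions, not the exact `/v3/unavailability/recurringBlock` schema.

```python
# RFC 5545 style, which the API does NOT use:
rrule_style = "FREQ=WEEKLY;INTERVAL=2;BYDAY=MO,WE"

# Structured recurrence in the spirit of the dashboard form fields
# (name, frequency, interval, dates, times, weekday selection).
structured_rule = {
    "name": "Biweekly maintenance",
    "frequency": "weekly",
    "interval": 2,              # every second week
    "weekdays": ["MO", "WE"],   # weekday selection for weekly/biweekly
    "startTime": "09:00",
    "endTime": "12:00",
    "startDate": "2024-03-04",
    "endDate": "2024-12-31",
}
```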
V1 → V3 migration sync expands service field coverage
Guide: Migrating from V1.
Migration
POST /v3/migration/sync now persists the full set of mappable Service fields when creating V3 services from V1, instead of only name, description, duration, weekly availability, and the schedule-vs-allocation type. Migrated services now also retain:
- Duration options: `durationSelect`, `durationMin`, `durationMax`, `durationInterval`.
- Book-ahead window: `bookAheadUnit`, `bookAheadValue`, `bookInAdvance`.
- Capacity and limits: `bookingLimit`, `bookingInterval`, `padding`, `bookingsPerSlot` (from V1 `maxCapacity`), `dailyBookingLimitCount`, `dailyBookingLimitMinutes`, `maxBookingLimit`, `maxResourceBookingLimit`.
- Fees: `feeAmount`, `feeTaxable`, `cancellationFeeAmount`, `cancellationFeeTaxable`, `nonRefundable`.
- Other: `imageUrl`, `showOnline`, `roundRobin` (V1 integer mapped to V3 `NONE`/`RANDOM`/`BALANCED`/`COMBINED`), and custom fields (`field1`–`field10`).
Type → availabilityType mapping (unchanged, now documented)
The mapping from V1 type to V3 availabilityType is unchanged but worth restating: V1 type=1 (Appointment) maps to V3 availabilityType=schedule; V1 type=2 (Event) maps to V3 availabilityType=allocation.
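The restated mapping is small enough to express as a lookup table, which is how an integrator might guard their own migration checks:

```python
# V1 service `type` to V3 `availabilityType`, exactly as restated above.
V1_TYPE_TO_AVAILABILITY_TYPE = {
    1: "schedule",    # V1 type=1, Appointment
    2: "allocation",  # V1 type=2, Event
}

print(V1_TYPE_TO_AVAILABILITY_TYPE[1])  # schedule
print(V1_TYPE_TO_AVAILABILITY_TYPE[2])  # allocation
```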
Re-running migration
Existing migrated tenants can backfill these fields by re-running POST /v3/migration/sync. The migration is idempotent on legacyId, so previously migrated services that already exist in V3 will be skipped — to pick up the expanded coverage on those rows, delete the V3 service (or update it manually) before re-syncing.
Fields still not migrated
Some V1 fields remain unmapped because V3 has no equivalent column or the concept has been retired (serviceGroupId/serviceGroupName, calendarId/calendarResourceGroupId, mediaPageUrl, defaultService, consumerPadding, maxGroupSize). For those, use native /v3/* endpoints to configure the equivalent V3 behavior after migration.
Allocation legacy IDs and mapping
Guides: Weekly allocations, Single allocations.
Mapping
`GET /v3/mapping/ids` accepts `allocationId` (V1 allocation ID). The response value is always a JSON array of matching V3 UUIDs (weekly and/or single allocation rows). Scalar keys (`locationId`, `serviceId`, etc.) remain a single UUID or `null`.
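Because `allocationId` values are arrays while the other keys are scalars, callers may want a small normalizer. The response below is illustrative (placeholder UUIDs, not real data):

```python
# Illustrative GET /v3/mapping/ids response shape: allocationId maps to a
# JSON array of V3 UUIDs; scalar keys stay a single UUID or null (None).
response = {
    "allocationId": ["uuid-weekly-a", "uuid-single-b"],
    "serviceId": "uuid-service-c",
    "locationId": None,
}

def v3_ids(value):
    """Normalize a mapping value to a (possibly empty) list of V3 IDs."""
    if value is None:
        return []
    return value if isinstance(value, list) else [value]

print(v3_ids(response["allocationId"]))  # ['uuid-weekly-a', 'uuid-single-b']
print(v3_ids(response["serviceId"]))     # ['uuid-service-c']
print(v3_ids(response["locationId"]))    # []
```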
Allocations
`POST /v3/singleAllocation/setSingleAllocations` and `POST /v3/weeklyAllocation/setWeeklyAllocations` accept an optional `legacyId` on each allocation item so migrated data can retain the V1 identifier.
Existing Stage tenants can repopulate `legacyId` by re-running the V1 migration sync (or an allocation-only sync) so allocations are recreated from V1 with their IDs attached.
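A hypothetical allocation item carrying the optional `legacyId` might look like the following; every field name other than `legacyId` is a stand-in for illustration, not the documented request schema:

```python
# Sketch of items for a setWeeklyAllocations-style request body, where each
# item optionally carries the V1 identifier as legacyId.
items = [
    {"legacyId": 1042, "resourceId": "uuid-resource",
     "day": "monday", "startTime": "09:00", "endTime": "17:00"},
    {"resourceId": "uuid-resource",  # legacyId is optional per item
     "day": "tuesday", "startTime": "09:00", "endTime": "17:00"},
]

with_legacy = [item for item in items if "legacyId" in item]
print(len(with_legacy))  # 1
```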
