
Beating the 6-Minute Limit: How to Run Long Apps Script Jobs Without Cloud Functions
There comes a point in every Apps Script project where the same error fires three days in a row and quietly ruins your weekend:
Exception: Service Spreadsheets timed out while accessing document...
Exception: Exceeded maximum execution time
You wrote a script that was supposed to import 50,000 Magento orders into a Sheet. It worked beautifully on the first 8,000 rows, then died at the six-minute mark with no rollback and no progress checkpoint. Half your data is in, half your sheet is locked, and your two apparent options are (a) rewrite the whole thing as a Cloud Function or (b) pray.
The truth is that you almost never need to leave Apps Script for long-running work. The 6-minute limit is a hard cap on a single invocation, not on a job. With three patterns — self-rescheduling triggers, chunked state with PropertiesService, and parallel fan-out with LockService — you can run jobs that take hours, survive crashes, and resume from where they left off. All without touching Google Cloud.
Why the 6-Minute Limit Exists (And the Numbers You Need to Know)
Google enforces this limit because Apps Script runs on shared multi-tenant infrastructure. A runaway loop with no upper bound would starve the platform; the timeout is a circuit breaker, not a punishment.
The exact numbers, verified against the Google Workspace quotas page in 2026, depend on which side of the line you are on:
- Free Gmail / consumer accounts: 6 minutes per execution
- Workspace (Business / Enterprise): 6 minutes per execution (paying does not raise this)
- Total trigger runtime per day: 90 minutes free / 6 hours Workspace
- Simultaneous executions: 30 concurrent
- Web app doGet and doPost: a separate 30-second hard timeout for HTTP responses
Read those numbers carefully. The 6-minute limit is per invocation. The 90-minute or 6-hour cap is the daily total of all your trigger-driven runs combined. And web app entry points die at 30 seconds, not 6 minutes — webhook handlers must be fast or hand work off to a queue.
If you stack the patterns below, a free account's 90-minute daily budget buys you 18 self-rescheduled 5-minute chunks, and Workspace's 6-hour budget buys 72, all within a single calendar day. That is enough headroom for almost any small or medium business workload.
The Three Patterns That Solve 95% of Long Jobs
Before reaching for Cloud Run, decide which of these matches the shape of your work:
- Self-rescheduling trigger — the workhorse. The script processes what it can, saves a cursor, and schedules itself to resume.
- Chunked state with PropertiesService — for jobs where you know the total size up front (e.g. "rebuild dashboards for 200 stores") and want deterministic progress visibility.
- Parallel fan-out with LockService — when chunks are independent and can be processed by separate scheduled invocations running side by side.
Pick the simplest pattern that fits. Most Magento syncs, BI rebuilds, and CRM enrichments are fine with pattern 1. Pattern 2 adds progress visibility; pattern 3 adds throughput.
Pattern 1: The Self-Rescheduling Trigger
This is the pattern you will reach for 80% of the time. The script does as much work as it can within 5 minutes (leaving a 1-minute safety margin), saves its progress to script properties, and creates a one-shot trigger to call itself again in a minute.
// Code.gs
const MAX_RUNTIME_MS = 5 * 60 * 1000; // 5 min, with 1 min safety
const PROP_KEY = 'IMPORT_CURSOR';

function processOrders() {
  const start = Date.now();
  const props = PropertiesService.getScriptProperties();
  let cursor = parseInt(props.getProperty(PROP_KEY) || '0', 10);
  const sheet = SpreadsheetApp.getActive().getSheetByName('Orders');
  const totalOrders = getTotalOrderCount(); // from your source API

  while (cursor < totalOrders) {
    if (Date.now() - start > MAX_RUNTIME_MS) {
      // Out of time — save state and reschedule
      props.setProperty(PROP_KEY, String(cursor));
      scheduleSelf('processOrders');
      return;
    }
    const batch = fetchOrderBatch(cursor, 100); // 100 at a time
    appendRowsToSheet(sheet, batch);
    cursor += batch.length;
  }

  // Done!
  props.deleteProperty(PROP_KEY);
  cleanupTriggers('processOrders');
  Logger.log('Import complete: ' + cursor + ' orders');
}

function scheduleSelf(handler) {
  cleanupTriggers(handler);
  ScriptApp.newTrigger(handler)
    .timeBased()
    .after(60 * 1000) // 1 minute from now
    .create();
}

function cleanupTriggers(handler) {
  ScriptApp.getProjectTriggers()
    .filter(t => t.getHandlerFunction() === handler)
    .forEach(t => ScriptApp.deleteTrigger(t));
}
The two non-obvious points. First, leave a safety margin (5 minutes, not 6): the time check runs before each batch, so a batch that starts just under the limit still has to finish, and the state save and reschedule must fit in what remains. Second, always clean up old triggers before creating a new one; otherwise you accumulate dozens of orphaned schedules over a long job and eventually hit the 20-trigger project cap.
This single pattern, given enough days, can move a million rows. It is slower than a Cloud Run job, but it costs $0 and has zero ops surface.
Pattern 2: Chunked State for Deterministic Progress
If your job has a known total — "rebuild this dashboard for 200 stores" — you want progress visibility, not just a cursor. Store a small JSON state object in PropertiesService and surface it in a status sheet your team can refresh.
function rebuildAllDashboards() {
  const start = Date.now();
  const props = PropertiesService.getScriptProperties();
  const state = JSON.parse(props.getProperty('REBUILD_STATE') || '{}');

  if (!state.queue) {
    state.queue = getStoreList().map(s => s.id);
    state.completed = [];
    state.failed = [];
  }

  while (state.queue.length > 0) {
    if (Date.now() - start > 5 * 60 * 1000) break;
    const storeId = state.queue.shift();
    try {
      rebuildOneDashboard(storeId);
      state.completed.push(storeId);
    } catch (err) {
      state.failed.push({ storeId, error: err.message });
    }
    props.setProperty('REBUILD_STATE', JSON.stringify(state));
    writeStatusToSheet(state); // visible progress for your team
  }

  if (state.queue.length > 0) {
    scheduleSelf('rebuildAllDashboards');
  } else {
    props.deleteProperty('REBUILD_STATE');
  }
}
Two hidden upgrades. First, the queue is mutated in memory and persisted at the end of every iteration, so a hard kill mid-store loses nothing: the last saved state still contains that store, and it is simply reprocessed on resume. Second, failures do not kill the job — they are logged into state.failed and the script moves on. You sweep the failed list at the end and decide whether to retry.
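The status sheet itself can be trivial. A minimal sketch of writeStatusToSheet, assuming a dedicated 'Status' tab (the tab name and layout are assumptions, not part of the pattern):

// Hypothetical status writer: dumps the three counters to a 'Status' tab
// so anyone on the team can watch progress without opening the editor.
function writeStatusToSheet(state) {
  const sheet = SpreadsheetApp.getActive().getSheetByName('Status');
  sheet.getRange('A1:B4').setValues([
    ['Last update', new Date()],
    ['Remaining', state.queue.length],
    ['Completed', state.completed.length],
    ['Failed', state.failed.length]
  ]);
}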
PropertiesService writes are not free (around 100ms each), so do not persist after every single row. Persist after every chunk of work that costs more than the write itself.
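For fine-grained work, checkpoint on an interval instead. A sketch of the same loop body from rebuildAllDashboards, persisting every 20 items; the interval is an assumption to tune against your own write cost, and processOneItem stands in for your per-item work:

const PERSIST_EVERY = 20; // assumption: tune against your write cost
let sinceLastPersist = 0;

while (state.queue.length > 0) {
  if (Date.now() - start > 5 * 60 * 1000) break;
  processOneItem(state.queue.shift()); // hypothetical per-item work
  sinceLastPersist++;
  if (sinceLastPersist >= PERSIST_EVERY) {
    props.setProperty('REBUILD_STATE', JSON.stringify(state));
    sinceLastPersist = 0;
  }
}
// One final write so the trailing partial batch survives the exit.
// Worst case after a hard kill: up to PERSIST_EVERY items re-run on resume.
props.setProperty('REBUILD_STATE', JSON.stringify(state));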
Pattern 3: Parallel Fan-Out with LockService
When chunks are truly independent — say, enriching 10,000 product descriptions with AI — running them serially is wasteful. Apps Script supports up to 30 simultaneous executions per project, so you can split the work across N parallel triggers and merge results.
function fanOutEnrichment() {
  const totalProducts = 10000;
  const numWorkers = 10;
  const chunkSize = Math.ceil(totalProducts / numWorkers);

  for (let i = 0; i < numWorkers; i++) {
    const start = i * chunkSize;
    const end = Math.min(start + chunkSize, totalProducts);
    PropertiesService.getScriptProperties()
      .setProperty(`WORKER_${i}_RANGE`, JSON.stringify({ start, end }));
    ScriptApp.newTrigger(`worker${i}`)
      .timeBased()
      .after(i * 1000) // stagger by 1s to avoid quota spikes
      .create();
  }
}

// Triggers can only invoke top-level named functions, so each worker
// needs a thin wrapper. Define one per worker, worker0 through worker9:
function worker0() { workerGeneric(0); }
function worker1() { workerGeneric(1); }
// ...and so on up to worker9.

function workerGeneric(workerIndex) {
  const lock = LockService.getScriptLock();
  const range = JSON.parse(
    PropertiesService.getScriptProperties()
      .getProperty(`WORKER_${workerIndex}_RANGE`)
  );

  // Each worker is itself bound by the 6-minute cap; for large ranges,
  // combine this loop with Pattern 1's time check and reschedule.
  for (let i = range.start; i < range.end; i++) {
    const enriched = enrichProduct(i); // hits OpenAI / Gemini
    lock.waitLock(30000);
    try {
      writeEnrichedRow(i, enriched);
    } finally {
      lock.releaseLock();
    }
  }
}
LockService is the critical detail. When ten workers all want to write to the same sheet, two of them writing at the same moment can interleave rows or overwrite each other's updates. The lock serializes the actual write while letting the slow part (the AI call) run in parallel.
This pattern cuts a 6-hour serial job down to about 40 minutes at zero cost. It is also the closest Apps Script gets to feeling like a real distributed job runner.
The Foot-Guns Nobody Warns You About
A few traps that bite production scripts:
Trigger orphaning. If your script crashes before cleanupTriggers(), you accumulate triggers forever. Always wrap the whole job in try/catch and clean up in finally.
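One workable shape for that, with a twist: if the job reschedules itself, a blind cleanup in finally would delete the resume trigger you just created, so track whether a resume is pending. This is a sketch; runJobChunk and its boolean return are assumptions, standing in for your actual time-boxed work.

function safeJob() {
  let resumeScheduled = false;
  try {
    // Hypothetical: does one time-boxed chunk of work and returns true
    // if it called scheduleSelf('safeJob') to continue later.
    resumeScheduled = runJobChunk();
  } catch (err) {
    console.error('Job crashed: ' + err.message);
  } finally {
    // Sweep triggers only when no resume is pending; otherwise this
    // cleanup would delete the trigger that keeps the job alive.
    if (!resumeScheduled) cleanupTriggers('safeJob');
  }
}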
PropertiesService write contention. Two simultaneous writers to the same key will silently overwrite each other. Use LockService whenever you have parallel writes, or shard your keys per worker (STATE_WORKER_${i}).
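Both fixes in one sketch: a per-worker key needs no lock because writers never collide, while a genuinely shared key needs its read-modify-write serialized under the script lock. The key names, workerIndex, workerState, and batchCount are all illustrative.

const props = PropertiesService.getScriptProperties();

// Sharded: each worker owns its own key, so no contention is possible.
props.setProperty(`STATE_WORKER_${workerIndex}`, JSON.stringify(workerState));

// Shared: without the lock, two workers can both read N and both write
// N + batchCount, silently dropping one increment.
const lock = LockService.getScriptLock();
lock.waitLock(10000);
try {
  const done = parseInt(props.getProperty('TOTAL_DONE') || '0', 10);
  props.setProperty('TOTAL_DONE', String(done + batchCount));
} finally {
  lock.releaseLock();
}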
Web app vs trigger timeout confusion. A doPost handler dies at 30 seconds. If your webhook needs to do 4 minutes of work, the handler must enqueue (write to a Sheet, return 200) and a separate trigger must drain the queue. We covered the webhook side in our Apps Script webhooks guide — that piece pairs naturally with this one.
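The enqueue half of that split is tiny. A sketch assuming a 'Queue' tab as the buffer; the tab name, the row layout, and handleWebhookPayload are assumptions.

// doPost only appends and acknowledges: comfortably inside 30 seconds.
function doPost(e) {
  const queue = SpreadsheetApp.getActive().getSheetByName('Queue');
  queue.appendRow([new Date(), e.postData.contents, 'pending']);
  return ContentService.createTextOutput('ok');
}

// A time-based trigger drains the queue with the full 6-minute budget.
// For long queues, add Pattern 1's time check and reschedule.
function drainQueue() {
  const queue = SpreadsheetApp.getActive().getSheetByName('Queue');
  const rows = queue.getDataRange().getValues();
  for (let i = 0; i < rows.length; i++) {
    if (rows[i][2] !== 'pending') continue;
    handleWebhookPayload(rows[i][1]); // hypothetical: the real work
    queue.getRange(i + 1, 3).setValue('done'); // column C = status
  }
}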
Quota stacking. Long jobs typically also burn UrlFetchApp quota, Gmail send quota, and the daily total trigger runtime. Hitting any one of them kills the job just as surely as the 6-minute cap. Sprinkle console.log checkpoints through the loop so that when a run dies, the logs tell you which ceiling you hit and where.
Session and auth state lost between invocations. If your script holds a logged-in user session, that session does not survive across reschedules. Store the session token in PropertiesService just like any other state — the same pattern we documented in our Apps Script auth and state guide.
When to Stop Fighting and Move to Cloud Run
There is a real boundary. Move out of Apps Script when:
- A single conceptual job genuinely needs more than 30 minutes of sustained CPU
- You need more than 30 concurrent workers
- You have a job DAG with dependencies (job A must finish before job B starts) that would require coordinating ten triggers manually
- Your team already has a Go or Node.js skillset and the maintenance cost of Apps Script gymnastics outweighs the simplicity
Until you hit one of those, the patterns above scale further than people assume. We have seen Magento-to-Sheets pipelines move 200,000 rows nightly on a standard Workspace account, using nothing but pattern 1 plus a cleanup sweep. The architecture is documented end-to-end in our Magento order sync guide; the long-job patterns here are what keep it stable past the first few thousand rows.
If your team is staring at the 6-minute error and wondering whether to migrate the whole thing, the answer is usually no — you just need one of the three patterns above. We help teams pick the right one and ship the production version through MageSheet's Apps Script consulting practice.
Frequently Asked Questions
Will my script silently lose data when it hits the 6-minute execution limit?
Yes, if you don't checkpoint. When the timeout fires, the current invocation is hard-killed mid-execution — anything not yet flushed to the Sheet or PropertiesService is gone, and there is no automatic rollback. The self-rescheduling pattern in this guide checkpoints after every batch, so the worst-case loss is whatever the last batch was processing (typically 100 rows or fewer). Without checkpointing, you can lose hours of work in a single timeout.
Does upgrading to Google Workspace Business or Enterprise remove the 6-minute limit?
No. The per-invocation 6-minute cap is identical on free Gmail accounts and on paid Workspace plans in 2026 — Google has not changed this in years. What Workspace does upgrade is the daily total trigger runtime (from 90 minutes to 6 hours cumulative) and the UrlFetchApp quota (from 20,000 to 100,000 calls per day). For long-job patterns, the daily ceiling matters more than the per-run cap.
Can I just use Utilities.sleep or a setTimeout-style trick to keep the script alive past 6 minutes?
No. Apps Script does not expose setTimeout, and Utilities.sleep counts toward the 6-minute budget — sleeping 30 seconds gives you 5 minutes 30 seconds of compute, not 6 minutes plus 30 seconds. The only legitimate escape is to actually exit the function and reschedule it via a one-shot time-based trigger. Anything else is a bug waiting to fire.
How do I avoid accumulating dozens of orphaned triggers from a crashed self-rescheduling script?
Two rules. First, always clean up old handlers of the same name before creating a new one — the cleanupTriggers() helper in this guide does exactly that. Second, wrap the main job loop in try/catch with cleanup in the finally block, so a thrown exception cannot escape without removing its trigger. If you have already accumulated orphans, run ScriptApp.getProjectTriggers().forEach(t => ScriptApp.deleteTrigger(t)) once from the editor to wipe the slate. Apps Script caps you at 20 triggers per project, so neglected orphans will eventually break new schedules.
Is there a hard ceiling on how much work these patterns can move per day?
Yes — the daily total trigger runtime. On a free account, all your triggers combined cannot exceed 90 minutes of runtime per day. On Workspace it is 6 hours. That means a self-rescheduling job using 5-minute chunks can process at most 18 chunks per day on free or 72 chunks on Workspace. If your job legitimately needs more than 6 hours of sustained compute every day, that is the boundary where moving to Cloud Run starts to make economic sense.