
Jeevan is a data engineer with 6+ years of experience in data engineering and analytics. He currently works at Rectangle Health, building cloud-scale data pipelines and analytics solutions on AWS. His work includes deploying churn prediction ML models in SageMaker that help safeguard $8.3M ARR, and building multiple AWS QuickSight dashboards used by executive and business teams. Previously, he worked at Publicis Sapient, where he led ETL migrations and delivered Power BI solutions for large user groups.
AWS QuickSight is Amazon's cloud-native business intelligence service that integrates seamlessly with your AWS data stack. The real power isn't just visualization — it's SPICE, the in-memory engine that lets you query billions of rows in seconds without touching your production databases. I've built multiple dashboards with QuickSight and replaced 12 Excel workbooks, saving my team 10 hours per week. This guide covers everything I wish I knew before my first QuickSight project.
- What AWS QuickSight is and how it differs from Power BI and Tableau
- How SPICE works and when to use it vs. Direct Query
- Architecture patterns for connecting QuickSight to S3, Redshift, and Glue
- Real-world lessons from building production dashboards at scale
- How I replaced 12 Excel workbooks and saved 10 hours per week
- Common mistakes that derail QuickSight implementations
Quick Answers
What is AWS QuickSight?
AWS QuickSight is Amazon's cloud-native business intelligence service for creating interactive dashboards and visualizations. It connects to AWS data sources like S3, Redshift, Athena, and RDS, plus external sources like Salesforce and on-premise databases. Its standout feature is SPICE, an in-memory calculation engine that enables fast queries without loading your production systems.
How much does AWS QuickSight cost?
QuickSight offers flexible pricing: Authors cost $24/month, Readers cost $3/month per user. For embedded or high-volume use cases, capacity pricing offers bulk sessions at $0.50/session. SPICE storage costs $0.38 per GB/month. The low Reader cost makes QuickSight cost-effective for organizations scaling analytics across many users.
Is AWS QuickSight better than Power BI?
It depends on your stack. QuickSight excels in AWS-native environments with seamless S3, Redshift, and Athena integration. Power BI is stronger for Microsoft-heavy environments and offers more advanced visualization options. QuickSight's pay-per-session model can be cheaper for organizations with many occasional users.
What is SPICE in QuickSight?
SPICE (Super-fast, Parallel, In-memory Calculation Engine) is QuickSight's in-memory data store. When you import data into SPICE, it's cached in AWS's infrastructure, enabling sub-second query performance without hitting your source databases. This protects production systems and enables fast dashboard interactions even with billions of rows.
What is AWS QuickSight?
AWS QuickSight is a cloud-native, serverless business intelligence service that lets you create interactive dashboards, perform ad-hoc analysis, and share insights across your organization. It's fully managed — no servers to provision, no software to install — and scales automatically to handle thousands of concurrent users.
When I first started at Rectangle Health, the analytics landscape was chaos. Twelve Excel workbooks, each maintained by different people, with formulas referencing other workbooks that may or may not be up to date. Sound familiar?
QuickSight solved this problem, but not just because it's a dashboarding tool. The real value is how it fits into the AWS ecosystem. If your data lives in S3, Redshift, Athena, or RDS, QuickSight connects natively without complex ETL or connector licensing.
The platform has evolved significantly since its 2016 launch. Today, it includes:
- QuickSight Q: Natural language queries (ask questions in plain English)
- ML Insights: Anomaly detection and forecasting built-in
- Embedded Analytics: Embed dashboards in your own applications
- Row-Level Security: Control data access at the user level
- SPICE: The in-memory engine that changes everything
QuickSight's value isn't just visualization — it's the seamless integration with AWS services and SPICE's ability to enable fast analytics without overloading production databases.
Why I Chose QuickSight Over Power BI
I've used Power BI extensively — at Publicis Sapient, I automated marketing KPI dashboards for 600 users. Power BI is excellent, especially in Microsoft-heavy environments. But when your data stack is AWS-native, QuickSight has distinct advantages.
The AWS-Native Advantage
At Rectangle Health, our data architecture looks like this:
- S3 stores raw data (2 TB daily ingestion)
- AWS Glue catalogs and transforms data
- Redshift serves as our data warehouse
- dbt manages transformation logic
- Airflow orchestrates everything
QuickSight slots into this architecture without friction. There's no gateway to configure (like Power BI's On-Premises Data Gateway), no ODBC drivers to maintain, no authentication headaches. You grant QuickSight an IAM role, and it reads from your data sources directly.
The Pricing Reality
Here's where QuickSight surprised me. At $3/month per Reader (compared to Power BI Pro at $10/user/month), the cost advantage is significant when scaling analytics across an organization.
At Rectangle Health, we have executives who check dashboards alongside analysts who use them daily. Everyone pays the same Reader price regardless of usage frequency, but that price is 70% less than Power BI Pro.
For a 50-person organization with 10 power users (authors) and 40 dashboard consumers (readers), the math looks like this:
- Power BI Pro: 50 × $10 = $500/month
- QuickSight: 10 authors × $24 + 40 readers × $3 = $360/month
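The comparison above is easy to sanity-check in a few lines of Python. This is just the arithmetic from the example, with list prices as of writing, so you can plug in your own headcounts:

```python
def monthly_cost_quicksight(authors, readers,
                            author_price=24.0, reader_price=3.0):
    """Standard per-user QuickSight pricing: authors and readers billed separately."""
    return authors * author_price + readers * reader_price

def monthly_cost_powerbi_pro(users, price_per_user=10.0):
    """Power BI Pro charges every user the same per-seat rate."""
    return users * price_per_user

# The 50-person example: 10 authors, 40 readers
print(monthly_cost_quicksight(10, 40))   # 360.0
print(monthly_cost_powerbi_pro(50))      # 500.0
```

The gap widens as the reader-to-author ratio grows, which is exactly the "many occasional users" scenario where QuickSight shines.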
For embedded analytics or high-volume scenarios, QuickSight offers capacity-based pricing (bulk sessions) that can reduce costs further.
QuickSight's $3/month Reader pricing makes organization-wide analytics affordable. For embedded use cases with unpredictable user counts, explore capacity pricing for bulk sessions.
Choose QuickSight when your data lives in AWS and you have many occasional dashboard users. Choose Power BI when you're in the Microsoft ecosystem or need the most advanced visualization options.
Understanding SPICE: The Secret Weapon
SPICE (Super-fast, Parallel, In-memory Calculation Engine) is QuickSight's in-memory data store. When you import data into SPICE, it's cached in AWS's managed infrastructure, enabling sub-second query performance. SPICE data refreshes can be scheduled or triggered via API, and capacity is billed at $0.38 per GB/month.
SPICE is the feature that changed how I think about business intelligence. Before SPICE, every dashboard interaction meant a query to the source database. Heavy dashboard usage could slow down production systems. DBAs would complain about BI tools hammering their servers.
SPICE decouples dashboards from source systems. Your data is loaded into SPICE once (on a schedule you control), and all dashboard interactions hit the SPICE cache instead of your database.
How SPICE Actually Works
When you create a dataset in QuickSight, you choose between:
- SPICE Import: Data is copied into SPICE storage
- Direct Query: Queries hit the source database in real-time
For SPICE imports, the process is:
- QuickSight executes your dataset query against the source
- Results are loaded into SPICE (compressed, columnar storage)
- Dashboard interactions query SPICE, not the source
- You schedule refreshes to keep SPICE data current
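The SPICE-vs-Direct Query choice is made per dataset, via the `ImportMode` field when the dataset is created through the API. A hedged sketch of building that request — the resource IDs and SQL are illustrative, and in practice you'd also declare the output columns:

```python
def dataset_request(account_id, dataset_id, data_source_arn, sql,
                    import_mode="SPICE"):
    """Build a create_data_set request body; ImportMode selects SPICE
    or DIRECT_QUERY at dataset creation time."""
    assert import_mode in ("SPICE", "DIRECT_QUERY")
    return {
        "AwsAccountId": account_id,
        "DataSetId": dataset_id,
        "Name": dataset_id,
        "ImportMode": import_mode,  # the key decision
        "PhysicalTableMap": {
            "main": {
                "CustomSql": {
                    "DataSourceArn": data_source_arn,
                    "Name": "main",
                    "SqlQuery": sql,
                    "Columns": [],  # declare real output columns here
                }
            }
        },
    }

# Pass to the API with:
# boto3.client("quicksight").create_data_set(**dataset_request(...))
```

Keeping the request builder as a plain function makes it easy to version dataset definitions alongside the rest of your infrastructure code.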
At Rectangle Health, our SPICE datasets refresh every 15 minutes via Airflow. This means dashboards are at most 15 minutes behind reality — acceptable for most business decisions, and it protects our Redshift cluster from dashboard query load.
SPICE Optimization Techniques
After building multiple dashboards, here's what I've learned about SPICE optimization:
1. Pre-aggregate when possible
Don't import raw transaction data if you only need daily summaries. Aggregate in your dbt models or SQL before loading to SPICE. This reduces SPICE storage costs and improves query performance.
2. Use incremental refreshes
For large datasets, full refreshes are expensive. QuickSight supports incremental refresh — you define a date field, and only recent data is reloaded. We use this for our transaction data, refreshing only the last 7 days each cycle.
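Incremental refresh is configured as refresh properties on the dataset. A sketch of the payload for a 7-day lookback like ours — the exact request shape should be checked against the current QuickSight API reference, and the column name here is illustrative:

```python
def incremental_refresh_properties(date_column, lookback_days=7):
    """Refresh-properties payload telling SPICE to reload only rows
    whose date_column falls inside the lookback window."""
    return {
        "RefreshConfiguration": {
            "IncrementalRefresh": {
                "LookbackWindow": {
                    "ColumnName": date_column,
                    "Size": lookback_days,
                    "SizeUnit": "DAY",
                }
            }
        }
    }

# Applied with something like:
# quicksight.put_data_set_refresh_properties(
#     AwsAccountId=account_id, DataSetId="transactions",
#     DataSetRefreshProperties=incremental_refresh_properties("txn_date"))
```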
3. Remove unused columns
Every column in your SPICE dataset consumes storage. If a column isn't used in any visual, exclude it from the dataset. I've seen datasets cut by 40% just by removing unused fields.
4. Partition large datasets
Rather than one massive dataset, consider partitioning by time period or business unit. This enables faster refreshes and more granular access control.
SPICE has a 1 billion row / 1 TB limit per dataset in Enterprise edition. If you're working with larger datasets, you'll need to pre-aggregate, partition, or use Direct Query for those specific analyses.
SPICE transforms QuickSight from a query tool into a true analytics platform. By caching data in-memory, you get fast dashboards without impacting production systems. Invest time in optimizing your SPICE datasets — it pays off in performance and cost savings.
Building Dashboards That Executives Actually Use
Here's a hard truth: most dashboards fail not because of technical problems, but because nobody uses them. I've seen beautifully designed dashboards that executives ignore because they don't answer the questions that matter.
Start with the Decision, Not the Data
Before building any dashboard, I ask three questions:
- What decision will this dashboard inform?
- Who makes that decision, and how often?
- What's the minimum data needed to make that decision confidently?
At Rectangle Health, our merchant churn dashboard exists because the leadership team needs to decide where to focus retention efforts. The dashboard shows at-risk merchants, their revenue impact, and the factors driving churn risk. Every metric ties to a specific decision.
The 5-Second Rule
Executives are busy. If they can't understand the main story within 5 seconds of opening a dashboard, they'll close it.
This means:
- Lead with the headline: Put the most important metric at the top, largest
- Use clear labels: "Merchants at Churn Risk: 1,247" not "CRN_CNT_MTD"
- Show context: Is 1,247 good or bad? Compare to last month, last year, target
- Minimize scrolling: The key story should be visible without scrolling
The dashboards that get used aren't the ones with the most charts. They're the ones that answer "so what?" within five seconds of opening.
Dashboard Design Patterns That Work
The Executive Summary Pattern
- Top row: 3-4 KPI cards showing headline metrics with trend arrows
- Middle section: One or two key charts (usually time series + breakdown)
- Bottom section: Drill-down details for those who want to explore

The Comparison Pattern
For dashboards that answer "how are we doing vs. target/last year/competitors":
- Target vs. Actual bars or gauges
- Variance highlighting (green for positive, red for negative)
- Trend lines showing trajectory
The Investigation Pattern
For operational dashboards where users need to drill into problems:
- Filters prominently displayed
- Sortable tables with conditional formatting
- Drill-through links to detail pages
Dashboard Checklist
- Clear title that explains what decision this dashboard informs
- Key metric visible within 5 seconds without scrolling
- Context provided for all metrics (vs. target, vs. last period)
- Filters default to the most common use case
- Mobile-friendly layout for on-the-go viewing
- Refresh timestamp visible so users know data freshness
- Owner/contact information for questions
Build dashboards that answer specific business questions, not dashboards that display available data. Lead with the story, provide context, and make the key insight visible within 5 seconds.
QuickSight + AWS Data Stack Integration
The real power of QuickSight emerges when you integrate it into a modern AWS data stack. Here's how we've architected our analytics pipeline at Rectangle Health.
The Architecture
Data Ingestion Layer: Sources (APIs, DBs) → S3 Lake (Raw Data) → Glue (Catalog) → Redshift (Warehouse)
Analytics Layer: Airflow (Orchestration) → dbt (Models) → SPICE (Cache) → QuickSight (Dashboards)
S3 as the Foundation
Everything starts in S3. Raw data lands in our data lake in Parquet format (compressed, columnar — better for analytics than CSV or JSON). We partition by date so Athena and Glue can scan efficiently.
Our S3 structure:
s3://our-data-lake/
├── raw/
│ ├── transactions/year=2026/month=01/day=27/
│ ├── merchants/year=2026/month=01/day=27/
│ └── events/year=2026/month=01/day=27/
├── processed/
│ └── (dbt output lands here)
└── analytics/
└── (aggregated datasets for QuickSight)
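The `year=/month=/day=` convention above is Hive-style partitioning, which is what lets Athena and Glue prune scans to only the relevant prefixes. A minimal helper that builds those keys consistently (bucket and table names here are just the examples from the tree):

```python
from datetime import date

def partition_key(bucket, layer, table, d):
    """Hive-style year=/month=/day= prefix matching the lake layout."""
    return (f"s3://{bucket}/{layer}/{table}/"
            f"year={d.year:04d}/month={d.month:02d}/day={d.day:02d}/")

print(partition_key("our-data-lake", "raw", "transactions", date(2026, 1, 27)))
# s3://our-data-lake/raw/transactions/year=2026/month=01/day=27/
```

Centralizing the key format in one function keeps writers and readers agreeing on the layout — a mismatched zero-padding convention is a classic source of "missing" partitions.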
Glue for Cataloging
AWS Glue crawlers scan our S3 buckets and maintain a metadata catalog. This catalog is shared across Athena, Redshift Spectrum, and QuickSight. When a new column appears in our data, Glue detects it automatically.
Use Glue Data Quality to validate data before it reaches QuickSight. We catch data anomalies (null spikes, schema changes) in Glue before they corrupt dashboards.
Redshift as the Query Engine
While QuickSight can query S3 directly via Athena, we route most analytics through Redshift. Why?
- Faster joins: Redshift's distributed MPP architecture handles complex joins better
- Materialized views: We pre-compute expensive aggregations
- Concurrency scaling: Handles dashboard query spikes automatically
- Integration with dbt: Our transformation layer lives in Redshift
We right-sized our Redshift cluster and compressed our Parquet partitions, trimming query time by 38% and saving $42k in annual AWS spend.
dbt for Transformation Logic
Our 40+ dbt models transform raw data into analytics-ready datasets. Each model is tested, documented, and version-controlled. When business logic changes, we update the dbt model — not individual dashboard queries.
The key insight: dbt models are the single source of truth. QuickSight datasets point to dbt-created views, not raw tables. This means metric definitions are consistent across all dashboards.
Airflow for Orchestration
Airflow ties everything together. Our DAG runs every 15 minutes:
- Check for new data in S3
- Run dbt models if data is fresh
- Trigger SPICE dataset refreshes via QuickSight API
- Alert Slack if any step fails
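The four steps above can be sketched as plain Python to show the control flow. This is a simplified stand-in, not our actual DAG — in production each callable is an Airflow task with its own retries, and the step implementations (S3 checks, dbt invocation, the QuickSight API, the Slack webhook) are injected:

```python
def run_pipeline(check_new_data, run_dbt, refresh_spice, alert):
    """Run steps in order; skip when no new data, alert on any failure."""
    try:
        if not check_new_data():
            return "skipped"        # nothing new in S3 this cycle
        run_dbt()                   # transform only when data is fresh
        refresh_spice()             # then refresh SPICE via the API
        return "success"
    except Exception as exc:
        alert(f"pipeline failed: {exc}")
        return "failed"

# Example wiring with stubs:
print(run_pipeline(lambda: True, lambda: None, lambda: None, print))
# success
```

The important property is the ordering guarantee: SPICE is only refreshed after dbt completes, so dashboards never see half-transformed data.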
This is how we achieve 15-minute dashboard freshness with 99% pipeline success rate.
Automating SPICE Refreshes
QuickSight's scheduled refresh is fine for simple cases, but we needed more control. Using the QuickSight API, we trigger SPICE refreshes from Airflow after dbt models complete:
```python
from datetime import datetime, timezone

import boto3

quicksight = boto3.client('quicksight')

def refresh_dataset(dataset_id, account_id):
    """Kick off a SPICE ingestion for one dataset."""
    return quicksight.create_ingestion(
        DataSetId=dataset_id,
        # Ingestion IDs allow only letters, digits, hyphens, and
        # underscores, so avoid the colons in a raw isoformat string
        IngestionId=f"refresh-{datetime.now(timezone.utc):%Y%m%dT%H%M%S}",
        AwsAccountId=account_id,
    )
```
This ensures dashboards refresh only after data is ready — no more partial refreshes or stale data.
QuickSight's value multiplies when integrated into a modern data stack. Use S3 for storage, Glue for cataloging, Redshift for complex queries, dbt for transformation logic, and Airflow for orchestration. This architecture enables our dashboards to refresh every 15 minutes reliably.
Real Project: Replacing 12 Excel Workbooks
Let me tell you about the project that convinced me QuickSight was worth the investment.
The Problem
When I joined Rectangle Health, critical business metrics lived in 12 Excel workbooks. Each workbook was owned by a different analyst or department. They shared a few characteristics:
- Manual updates: Someone had to download data, paste it, refresh pivots
- Version chaos: "Revenue_Final_v3_ACTUAL.xlsx" — you know the drill
- Formula fragility: One wrong paste and calculations broke silently
- No single source of truth: Marketing's revenue number didn't match Finance's
- No access control: Anyone with the file could change anything
The finance team spent roughly 10 hours per week maintaining these workbooks. More importantly, executives didn't trust the numbers — they'd seen too many "oops, that formula was wrong" corrections.
The Solution
We migrated all 12 workbooks to QuickSight over 8 weeks. Here's how:
Week 1-2: Discovery and Mapping
I met with each workbook owner to understand:
- What business questions does this workbook answer?
- What data sources feed it?
- Who consumes it, and how often?
- What are the known pain points?
The key discovery: 8 of the 12 workbooks were answering variations of the same questions. We consolidated them into 3 dashboards.
Week 3-4: Data Pipeline Construction
We built dbt models to replicate the workbook logic. Every Excel formula became a documented, tested SQL transformation. When business logic was ambiguous ("how do we define active merchant?"), we documented the decision and got sign-off.
Week 5-6: Dashboard Development
We built the QuickSight dashboards, focusing on the 5-second rule. Each dashboard opened with the key metric, with drill-down details below.
Critical feature: we added a "Data Updated" timestamp to every dashboard. Users could see exactly how fresh the data was — building trust that the old Excel workflow couldn't provide.
Week 7-8: Training and Transition
We ran training sessions with all stakeholders. We kept the old Excel workbooks running in parallel for two weeks, comparing outputs. When discrepancies appeared (they did — twice), we traced them to Excel formula errors that had been silently producing wrong numbers.
The Results
The finance team got back the roughly 10 hours per week it had spent maintaining workbooks. The 25% data trust improvement came from a simple internal survey. Before: "I sometimes question whether the numbers are right." After: "I trust the dashboards to be accurate."
But the real win was cultural. When executives started sharing dashboard links in meetings instead of asking analysts to pull numbers, we knew the transformation had worked.
The Excel workbooks weren't just a technical problem — they were a trust problem. QuickSight gave us automated refreshes and version control, but the real value was that everyone finally agreed on one source of truth.
Replacing Excel workbooks isn't just about automation — it's about establishing a single source of truth. The time savings are real, but the trust improvement is more valuable. Document your business logic, test your transformations, and run parallel comparisons before cutting over.
SPICE vs Direct Query: When to Use Each
One of the most common questions I get: should I use SPICE or Direct Query for this dataset?
SPICE
- + Sub-second query performance regardless of dataset size
- + Protects source databases from dashboard query load
- + Enables offline dashboard viewing (data cached)
- + Predictable costs based on storage, not query volume
- + Faster dashboard loading for end users
- − Data is only as fresh as the last refresh
- − 1 billion row / 1 TB limit per dataset
- − Storage costs for large datasets ($0.38/GB/month)
- − Refresh failures can cause stale data
- − Initial load time for very large datasets
Direct Query
- + Always real-time data freshness
- + No row limits (constrained by source)
- + No SPICE storage costs
- + Good for datasets that change constantly
- + Simpler architecture (no refresh scheduling)
- − Query performance depends on source database
- − Heavy dashboard usage impacts source systems
- − Slower dashboard interactions for complex queries
- − Users may see timeouts for expensive queries
- − Costs scale with query volume, not storage
My Decision Framework
Use SPICE when:
- Data freshness of 15-60 minutes is acceptable
- Dashboard will have many concurrent users
- Source database performance is a concern
- Dataset is under 1 billion rows / 1 TB
- Queries involve complex calculations or aggregations
Use Direct Query when:
- Real-time data is required (trading dashboards, live ops)
- Dataset is too large for SPICE limits
- Data changes constantly throughout the day
- You're querying a high-performance source (Redshift, Athena)
- Dashboard has few users with simple queries
The Hybrid Approach
For many use cases, I use both. At Rectangle Health:
- SPICE for executive dashboards (refreshed every 15 min, need to be fast)
- Direct Query for operational dashboards (real-time merchant status)
- SPICE for historical analysis (last 2 years of data, rarely changes)
- Direct Query for ad-hoc exploration (analysts querying Redshift directly)
You can have multiple datasets pointing to the same source — one in SPICE for fast historical queries, one in Direct Query for real-time views. Use the right tool for each use case.
SPICE for performance and protection, Direct Query for real-time needs. Most organizations benefit from a hybrid approach — fast SPICE dashboards for executives, real-time Direct Query for operations.
Common Mistakes in QuickSight Implementations
After building multiple dashboards, I've seen (and made) plenty of mistakes. Here are the ones that hurt most:
QuickSight Implementation Mistakes
- Loading raw transaction data into SPICE instead of pre-aggregating — leads to slow refreshes and high costs
- Skipping data modeling — building dashboards directly on raw tables instead of using dbt or views
- Ignoring row-level security until production — retrofitting RLS is painful
- Not monitoring SPICE refresh failures — users see stale data and lose trust
- Over-engineering dashboards — 50 visuals on one page that nobody can interpret
- Forgetting mobile users — executives check dashboards on phones, design accordingly
- No naming conventions — 'Dataset 1', 'Copy of Revenue Dashboard' becomes chaos at scale
Mistake Deep Dive: Skipping Data Modeling
This is the most expensive mistake. I've seen teams build dashboards directly on raw database tables, embedding business logic in QuickSight calculated fields.
The problems compound:
- Inconsistent metrics: Each dashboard defines "revenue" slightly differently
- Slow dashboards: Complex calculations run on every interaction
- Hard to maintain: Changing a business rule means editing 20 dashboards
- No testing: QuickSight calculated fields can't be unit tested
The fix: Use dbt (or similar) to create a transformation layer. Define metrics once, test them, document them. QuickSight datasets point to dbt models, not raw tables.
Mistake Deep Dive: Ignoring SPICE Refresh Monitoring
SPICE refreshes can fail. Network issues, source database timeouts, schema changes — many things can go wrong. If you don't monitor refreshes, users see stale data without knowing it.
Our solution:
- Airflow monitors every refresh — alerts Slack on failure
- Dashboards show "Last Updated" timestamp — users see data freshness
- CloudWatch alarms — trigger if refresh hasn't succeeded in expected window
- Weekly SPICE health report — shows refresh success rates by dataset
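The core of the freshness check is a tiny pure function; only the data source is AWS-specific. A sketch, with the alarm threshold chosen to allow three missed 15-minute cycles before paging (the boto3 call shape in the comment is hedged — verify against the current API reference):

```python
from datetime import datetime, timedelta, timezone

def is_stale(last_success, max_age=timedelta(minutes=45)):
    """True when the newest successful refresh is older than max_age."""
    return datetime.now(timezone.utc) - last_success > max_age

# With boto3, last_success would come from something like
#   quicksight.list_ingestions(AwsAccountId=..., DataSetId=...)
# keeping only entries with IngestionStatus == "COMPLETED" and taking
# the newest CreatedTime.

fresh = datetime.now(timezone.utc) - timedelta(minutes=10)
print(is_stale(fresh))   # False
```

Wiring this into a scheduled check that posts to Slack gives you the "stale data" alarm before users notice the timestamp themselves.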
Nothing destroys dashboard trust faster than stale data without warning. Add visible timestamps and monitor your refreshes religiously.
Most QuickSight implementations fail due to poor data modeling and lack of monitoring, not QuickSight limitations. Invest in a proper transformation layer (dbt) and monitor SPICE refreshes like you monitor production systems.
QuickSight vs Power BI vs Tableau
People ask me this constantly. Here's my honest comparison based on production experience with all three.
When to Choose QuickSight
- Your data lives primarily in AWS (S3, Redshift, Athena)
- You have many occasional dashboard users (per-session pricing)
- You want embedded analytics without extra licensing
- You need serverless, fully managed infrastructure
- You're already investing in AWS skills
When to Choose Power BI
- Your organization is Microsoft-heavy (Azure, M365, Teams)
- You need the most advanced visualization options
- Self-service BI for business users is a priority
- You want the strongest community and training resources
- Budget allows for per-user pricing
When to Choose Tableau
- Visualization quality and flexibility are paramount
- You have complex, exploratory analytics needs
- Users need maximum self-service capabilities
- You're in a data-mature organization with analyst teams
- Budget is less constrained
There's no universally "best" BI tool. QuickSight is best when your data lives in AWS and you have variable dashboard usage. Power BI is best in Microsoft environments. Tableau is best when visualization flexibility matters most.
Choose your BI tool based on your data platform, usage patterns, and team skills — not vendor marketing. QuickSight excels in AWS-native environments with its pay-per-session model. Power BI dominates Microsoft shops. Tableau offers the most visualization power.
Key Takeaways: AWS QuickSight Success
1. QuickSight's real power is SPICE and AWS-native integration — not just dashboarding
2. SPICE protects your production databases while enabling fast dashboard interactions — invest time in optimization
3. Build dashboards that answer specific business questions — lead with the story, not the data
4. Integrate QuickSight into a modern data stack (S3 → Glue → Redshift → dbt → Airflow → SPICE)
5. Replace Excel workflows systematically — document logic, test transformations, run parallel comparisons
6. Monitor SPICE refreshes like production systems — stale data destroys trust
7. Choose QuickSight when your data lives in AWS and you have variable dashboard usage patterns
Frequently Asked Questions
What is AWS QuickSight?
AWS QuickSight is Amazon's cloud-native, serverless business intelligence service. It enables you to create interactive dashboards, perform ad-hoc analysis, and share insights across your organization. Key features include SPICE (in-memory caching), native AWS integrations, embedded analytics, and pay-per-session pricing.
How much does AWS QuickSight cost?
QuickSight offers flexible pricing. Authors cost $24/month ($40 for Author Pro with AI features). Readers cost $3/month per user ($20 for Reader Pro). For embedded analytics, capacity pricing offers bulk sessions starting at $250/month for 500 sessions. SPICE storage costs $0.38/GB/month. The low per-user Reader cost makes QuickSight accessible for organization-wide analytics.
What is SPICE in AWS QuickSight?
SPICE (Super-fast, Parallel, In-memory Calculation Engine) is QuickSight's in-memory data store. When you import data into SPICE, it's cached in AWS infrastructure, enabling sub-second query performance without hitting your source databases. SPICE supports up to 1 billion rows per dataset (with 1 TB size limit) and costs $0.38/GB/month.
Should I use SPICE or Direct Query?
Use SPICE when 15-60 minute data freshness is acceptable and you want fast dashboards that don't impact source systems. Use Direct Query when you need real-time data or your dataset exceeds SPICE limits. Most organizations benefit from a hybrid approach — SPICE for executive dashboards, Direct Query for operational views.
Is AWS QuickSight better than Power BI?
It depends on your environment. QuickSight excels with AWS-native data sources (S3, Redshift, Athena) and offers flexible per-session pricing. Power BI is stronger in Microsoft environments and offers more visualization options. Choose based on your data platform, usage patterns, and team skills.
How do I refresh SPICE datasets automatically?
QuickSight supports scheduled refreshes through the console (hourly, daily, weekly). For more control, use the QuickSight API to trigger refreshes from orchestration tools like Airflow. This lets you refresh SPICE only after upstream data pipelines complete, ensuring data consistency.
Can QuickSight handle large datasets?
SPICE supports up to 1 billion rows (or 1 TB) per dataset in Enterprise edition. For larger datasets, you can pre-aggregate data before loading, partition into multiple datasets, or use Direct Query to source systems. Redshift and Athena can handle petabyte-scale data accessed via Direct Query.
Sources & References
- Amazon QuickSight Documentation — Amazon Web Services (2026)
- Amazon QuickSight Pricing — Amazon Web Services (2026)
- SPICE Data in Amazon QuickSight — Amazon Web Services Documentation (2026)
- Building a Data Analytics Pipeline on AWS — AWS Architecture Blog (2025)
- dbt Documentation — dbt Labs (2026)
- Apache Airflow Documentation — Apache Software Foundation (2026)
- AWS Glue Developer Guide — Amazon Web Services (2026)
- Amazon Redshift Best Practices — Amazon Web Services (2026)