Transitioned from Plan to Build mode

Replit-Commit-Author: Agent
Replit-Commit-Session-Id: 279f1558-c0e3-40e4-8217-be7e9f4c6eca
Replit-Commit-Checkpoint-Type: full_checkpoint
Replit-Commit-Event-Id: 9d61b0f8-bf5a-400f-9764-0c614b540610
Replit-Commit-Screenshot-Url: https://storage.googleapis.com/screenshot-production-us-central1/b984cb14-1d19-4944-922b-bc79e821ed35/279f1558-c0e3-40e4-8217-be7e9f4c6eca/bgcvGPx
Replit-Helium-Checkpoint-Created: true
This commit is contained in:
sirpiglr 2025-12-17 02:10:03 +00:00
parent 78cdc7e184
commit 091461a865
2 changed files with 321 additions and 0 deletions


@ -0,0 +1,321 @@
Clarify goal
Do you want to:
Design a users/profiles schema linked to Supabase Auth?
Fix an existing users/profiles setup?
Add RLS policies for profiles?
Write queries or build an Edge Function around users/profiles?
If you share your intent, I'll tailor the exact SQL and policies. Meanwhile, here's a solid, production-ready baseline you can adopt.
Recommended schema pattern
Auth users live in auth.users (managed by Supabase Auth).
Your public profile table references auth.users.id, stores app-specific fields, and has RLS enabled.
Example schema and trigger to auto-provision profiles on signup:
SQL Query
-- 1) Profiles table
create table if not exists public.profiles (
id uuid primary key references auth.users(id) on delete cascade,
username text unique,
full_name text,
avatar_url text,
bio text,
website text,
created_at timestamptz default now() not null,
updated_at timestamptz default now() not null
);
-- 2) Keep updated_at fresh
create or replace function public.set_updated_at()
returns trigger
language plpgsql
as $$
begin
new.updated_at = now();
return new;
end;
$$;
drop trigger if exists set_profiles_updated_at on public.profiles;
create trigger set_profiles_updated_at
before update on public.profiles
for each row execute function public.set_updated_at();
-- 3) Enable RLS
alter table public.profiles enable row level security;
-- 4) RLS policies: self-read and self-write, plus public read if you want
do $$
begin
-- Read own profile
if not exists (
select 1 from pg_policies
where schemaname = 'public' and tablename = 'profiles' and policyname = 'read_own_profile'
) then
create policy read_own_profile
on public.profiles
for select to authenticated
using ((select auth.uid()) = id);
end if;
-- Optionally allow public read of basic fields (comment out if private)
if not exists (
select 1 from pg_policies
where schemaname = 'public' and tablename = 'profiles' and policyname = 'read_profiles_public'
) then
create policy read_profiles_public
on public.profiles
for select to anon, authenticated
using (true);
end if;
-- Update own profile
if not exists (
select 1 from pg_policies
where schemaname = 'public' and tablename = 'profiles' and policyname = 'update_own_profile'
) then
create policy update_own_profile
on public.profiles
for update to authenticated
using ((select auth.uid()) = id)
with check ((select auth.uid()) = id);
end if;
-- Insert restricted: only the signup trigger should insert
if not exists (
select 1 from pg_policies
where schemaname = 'public' and tablename = 'profiles' and policyname = 'insert_by_service_only'
) then
create policy insert_by_service_only
on public.profiles
for insert to service_role
with check (true);
end if;
end$$;
-- 5) Auto-create a profile on user signup
create or replace function public.handle_new_user()
returns trigger
language plpgsql
security definer
set search_path = public
as $$
begin
insert into public.profiles (id, username)
values (new.id, split_part(new.email, '@', 1))
on conflict (id) do nothing;
return new;
end;
$$;
drop trigger if exists on_auth_user_created on auth.users;
create trigger on_auth_user_created
after insert on auth.users
for each row execute function public.handle_new_user();
-- 6) Helpful indexes
create index if not exists idx_profiles_username on public.profiles (username);
Notes:
insert_by_service_only ensures clients don't create arbitrary profiles; the trigger creates them.
If you prefer not to expose profiles publicly, remove read_profiles_public and rely only on read_own_profile, or replace with a policy that limits fields via a view.
Optional: public read via a view
If you want to hide some columns publicly:
SQL Query
create or replace view public.public_profiles as
select id, username, full_name, avatar_url
from public.profiles;
alter view public.public_profiles set (security_invoker = true); -- Postgres 15+
grant select on public.public_profiles to anon, authenticated;
By default a view runs with its owner's privileges and can bypass RLS on the base table; setting security_invoker = true (Postgres 15+, supported on current Supabase) makes the caller's RLS policies apply when querying the view. Either way, keep the view's column list limited to what you intend to expose, and avoid blanket default-privilege grants on the schema.
Client usage examples
Get the current user's profile:
const { data, error } = await supabase
  .from('profiles')
  .select('*')
  .eq('id', (await supabase.auth.getUser()).data.user.id)
  .single()
Update own profile:
const { data, error } = await supabase
  .from('profiles')
  .update({ full_name: 'Ada Lovelace', avatar_url: 'https://...' })
  .eq('id', (await supabase.auth.getUser()).data.user.id)
  .select()
  .single()
Variations I can provide
Organization/multi-tenant profiles (link users to org_memberships and scope read/write).
Username reservation rules, case-insensitive unique constraint using citext.
Soft delete with deleted_at handling in RLS.
Storage bucket policy to restrict avatar uploads per user.
Tell me which variant you need, or share your current schema, and I'll generate precise SQL and policies.
MrPiglr
How to extract entire DB for replit to read? Issue is focused around user table and profiles
Reasoned
Fetching current tables, extensions, and edge functions to understand your setup before suggesting the best export path.
Ran list_tables
Reasoned
Goal
Export your Supabase database so Replit can read it, with emphasis on auth.users and public.profiles.
What you have
Auth users in auth.users (21 rows)
App profile tables: public.profiles (21 rows) and public.user_profiles (9 rows)
A separate custom public.users table (non-auth) with sessions/refresh_tokens linked to it
Because you have multiple “profile/user” tables, decide which you need in Replit:
If you want to mirror Supabase Auth: export auth.users + public.profiles (+ any related tables you need).
If you're building a standalone app without Supabase Auth: you might export public.users and any app tables linked to it.
Common export paths
SQL dump (schema + data)
Best for recreating the DB structure and content locally on Replit with Postgres/MySQL-compatible services (Replit PostgreSQL, Docker Postgres, or a managed addon).
How:
Using Supabase CLI (locally):
supabase db dump --data-only > data.sql
supabase db dump --schema-only > schema.sql
Or a full dump: supabase db dump > full.sql
Or via Dashboard: Database > Backups or Table Editor export (CSV per table).
On Replit side with Postgres:
psql < schema.sql
psql < data.sql
Pros: precise types/constraints preserved.
Cons: Replit must have Postgres or a service you can connect to.
CSV export (per table)
Works if you'll use a lightweight DB (e.g., SQLite) or load CSVs into an embedded DB in Replit.
For user-focused data:
Export auth.users (exclude sensitive columns if you don't need them).
Export public.profiles, public.user_profiles.
Load in Replit using your chosen DB library.
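The CSV-into-embedded-DB step above is standard-library work. A minimal sketch, assuming a SQLite target; the two-row profiles.csv sample is inlined purely for illustration (swap in your real export file):

```python
import csv
import io
import sqlite3

# Hypothetical stand-in for an exported profiles.csv; in practice use
# open("profiles.csv") on the file downloaded from the Supabase Dashboard.
csv_text = """id,username,full_name
a1,ada,Ada Lovelace
b2,grace,Grace Hopper
"""

conn = sqlite3.connect(":memory:")  # or a file path inside your Repl
conn.execute(
    "create table profiles (id text primary key, username text, full_name text)"
)

# DictReader maps the header row onto each data row, so named
# placeholders line up with the CSV columns.
rows = list(csv.DictReader(io.StringIO(csv_text)))
conn.executemany(
    "insert into profiles (id, username, full_name) "
    "values (:id, :username, :full_name)",
    rows,
)
conn.commit()
```

Types are looser than in Postgres (SQLite is dynamically typed), so keep UUIDs and timestamps as text unless you need arithmetic on them.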
Programmatic export (SQL-to-JSON)
If you can't use CLI/CSV, you can SELECT and serialize rows to JSON, then import in Replit.
Example key queries to run/export (non-executable examples):
SQL Query
-- Core
SELECT id, email, created_at, updated_at FROM auth.users;
SELECT * FROM public.profiles;
SELECT * FROM public.user_profiles;
-- If using custom public.users (non-Supabase auth)
SELECT * FROM public.users;
SELECT * FROM public.sessions;
SELECT * FROM public.refresh_tokens;
Save JSON files and import in Replit.
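The serialize step can be a one-liner if you pass default=str, so UUID and timestamp values that json can't encode natively are written as strings. A sketch, with a hypothetical row standing in for whatever your Postgres client returns:

```python
import json
from datetime import datetime, timezone

# Hypothetical rows as returned by a dict-cursor Postgres client;
# replace with the results of the SELECTs above.
rows = [
    {
        "id": "a1",
        "email": "ada@example.com",
        "created_at": datetime(2025, 1, 1, tzinfo=timezone.utc),
    },
]

# default=str stringifies anything json doesn't know (datetimes, UUIDs).
with open("auth_users.json", "w") as f:
    json.dump(rows, f, indent=2, default=str)
```

On the Replit side, json.load the file and insert the rows with your chosen DB library.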
Recommended for your setup
A) If Replit app will not use Supabase Auth:
Export schema + data for only the tables you need:
Choose either public.profiles or public.user_profiles as your canonical profile table (public.profiles is the richer one and linked across many tables).
If you rely on the custom public.users auth, export public.users, public.sessions, public.refresh_tokens as well.
Steps:
Use Supabase CLI to dump selected tables:
supabase db dump --data-only --schema public --table profiles --table user_profiles --table users --table sessions --table refresh_tokens > data.sql
supabase db dump --schema-only --schema public > schema_public.sql
In Replit with Postgres:
psql < schema_public.sql
psql < data.sql
B) If Replit app will use Supabase Auth (recommended if you want to keep auth parity):
Replit cannot host Supabase Auth; you'd still call Supabase Auth over HTTP from Replit, and only mirror the public.profiles table locally if needed for reads/testing.
Export only public.profiles (and related lookups) to Replit if you need local dev data. Keep auth.users in Supabase.
Minimizing sensitive data
auth.users contains hashed passwords/tokens. If you don't need them, export a sanitized view:
Create a view or SELECT that excludes encrypted_password, tokens, phone, etc.
Export the view results as CSV/JSON.
Example sanitized export query (illustrative):
SQL Query
SELECT id, email, created_at, updated_at, raw_user_meta_data
FROM auth.users;
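The same sanitization can be applied in post-processing if the rows were already exported. A minimal sketch; the SENSITIVE column set is an assumption, so check it against the columns actually present in your dump:

```python
# Columns in auth.users you likely don't want to share/import.
# Illustrative list only; adjust to match your export.
SENSITIVE = {
    "encrypted_password",
    "confirmation_token",
    "recovery_token",
    "email_change_token_new",
    "phone",
}

def sanitize(row: dict) -> dict:
    """Return a copy of the row with sensitive auth columns dropped."""
    return {k: v for k, v in row.items() if k not in SENSITIVE}

user = {
    "id": "a1",
    "email": "ada@example.com",
    "encrypted_password": "x",
    "phone": "555",
}
print(sanitize(user))  # {'id': 'a1', 'email': 'ada@example.com'}
```

Run every exported row through sanitize() before writing the JSON/CSV you hand to Replit.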
Practical commands (Supabase CLI)
Full schema+data:
supabase db dump > full.sql
Only public schema (schema+data):
supabase db dump --schema public > schema_public.sql
supabase db dump --data-only --schema public > data_public.sql
Only specific tables (data):
supabase db dump --data-only --schema public --table profiles --table user_profiles > data_profiles.sql
Only auth schema (be cautious with sensitive columns):
supabase db dump --schema auth > schema_auth.sql
supabase db dump --data-only --schema auth --table users > data_auth_users.sql
Consider post-processing to strip sensitive fields before sharing/importing.
Then in Replit with a Postgres instance:
psql "$DATABASE_URL" < schema_public.sql
psql "$DATABASE_URL" < data_profiles.sql
If you're not using Postgres in Replit, export to CSV from the Dashboard for the specific tables, then import with your chosen DB or in-memory store.
What do you want to include?
Tell me:
Which style you prefer (SQL dump vs CSV vs JSON)?
Which user system are you mirroring on Replit? public.users or Supabase Auth users?
Which profile table(s) you want: public.profiles, public.user_profiles, or both?
I'll generate exact export commands or safe SELECTs for sanitized CSV/JSON and, if needed, a minimal schema file tailored for Replit.

Binary file not shown.
