secondme project constitution v0
role: canonical doctrine
use this page when you need the stable answer to:
- what secondme is
- who it is for
- what the first wedge is
- what principles and non-goals guide decisions
if this page conflicts with a dashboard or a snapshot page, this page wins.
one-screen summary
secondme is a private chief-of-staff operating layer for high-context principals.
- the core bottleneck is not lack of intelligence; it is fragmented context, weak compression, bad timing, and broken follow-through.
- the first wedge is read-only onboarding through one high-context source, starting with telegram.
- the first value is time to first insight; the first wow is tailored leverage already in motion.
- the first user filter is not hnw alone; it is complexity x stakes x agency.
- the moat thesis is not raw model novelty; it is harness quality, memory quality, trust tuning, and continuity over time.
- we are not a research lab; we productize proven patterns, tools, and harnesses that work now.
mission, north star, horizon
mission
preserve and extend continuity of agency for high-context principals.
north star
the user feels:
- i do not need to reconstruct my situation from scratch
- the system found the next best intervention before i asked
- it can act, but it knows when to stop
horizon
- short: win one workflow so strongly that users pull for it repeatedly
- medium: become the private operating layer across communications, planning, memory, and approvals
- long: become the chief-of-staff substrate around a principal, where human and ai operators inherit continuity instead of rebuilding it
why now
the timing claim is simple:
architecture around the model is improving faster than most products are adapting to it.
that matters because:
- harness quality can create order-of-magnitude gains over naked model use on hard tasks
- memory systems and context pipelines are now composable enough for product use
- raw model access is commoditizing, which increases the value of workflow shape, trust, and continuity
weak point:
- benchmark wins do not automatically transfer into our workflow; they must be replicated on principal tasks
who this is for
true filter
- complexity
- stakes
- agency
likely early users
- founders with multiple live fronts
- investors and portfolio operators
- wealth-adjacent principals with fragmented high-stakes context
- operators who already feel the cost of dropped continuity
core pain
the recurring failure mode is:
- context lives across chats, docs, notes, calendars, tasks, and people
- the principal repeatedly reconstructs the same situation by hand
- timing slips and follow-through degrades
- delegated work loses context and identity
- assistants sound helpful but fail the trust test
why current agent tooling is not enough
- it still feels session-shaped and operator-heavy
- memory continuity is weak unless someone keeps curating it
- cross-context synthesis exists as craft, not as a stable product loop
- maintenance burden is still high enough to kill adoption
market wedge, first value, first wow, v1
market wedge
read-only onboarding through one high-context source, starting with telegram, for one principal.
first value
time to first insight: the system quickly reflects the user's real situation back from live context with near-zero setup.
first wow
the system explains the user's world back to them, shows where it can create leverage in their exact contexts, and already has 1-2 bounded workstreams in motion.
the user should receive:
- what the system clearly sees about the user's active reality
- where time, attention, or follow-through are leaking
- what long-range goals and bottlenecks are visible
- how the user should work with the system
- what safe workflows, tools, or artifacts are already being prepared
v1 scope
the minimum system to produce that outcome likely includes:
- read-only onboarding from one high-context source first, expanding to 2-3 as trust grows
- inspectable memory with promoted summaries
- one principal profile with priorities, preferences, and active campaigns
- self-brief plus capability briefing
- at least one bounded safe workstream already in motion
- briefing generation on a repeatable cadence once the first source is understood
- approval boundary for all meaningful outbound actions
- prepared artifacts such as a relationship map, crm seed, open-loop tracker, draft, or recommendation
- outcome logging so the next cycle starts from updated reality
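two v1 items above, inspectable memory with promoted summaries and outcome logging, imply a concrete shape: every promoted summary keeps provenance back to its raw sources so it can be audited. the sketch below is illustrative only; all names (`RawItem`, `PromotedMemory`, `promote`) are hypothetical, not the real secondme schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# hypothetical shapes for "inspectable memory with promoted summaries":
# a promoted entry always points back to the raw items that back it.
@dataclass(frozen=True)
class RawItem:
    source: str    # e.g. "telegram"
    item_id: str
    text: str

@dataclass(frozen=True)
class PromotedMemory:
    summary: str
    source_ids: tuple  # provenance: which raw items support this summary
    promoted_at: str   # when promotion happened, for the audit trail

def promote(summary: str, items: list) -> PromotedMemory:
    # promotion is an explicit, logged step, never silent mutation
    return PromotedMemory(
        summary=summary,
        source_ids=tuple(i.item_id for i in items),
        promoted_at=datetime.now(timezone.utc).isoformat(),
    )

raw = [
    RawItem("telegram", "msg-1", "board call moved to friday"),
    RawItem("telegram", "msg-2", "legal wants the draft before the call"),
]
memory = promote("board call moved to friday; legal needs the draft first", raw)
```

the design point is that a principal (or an auditor) can always trace a summary back to its sources, which is what makes the memory "inspectable" rather than opaque magic.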
product invariants
- the harness is the product
- scaffolding > model
- chief of staff > chatbot
- inspectable memory > opaque magic
- continuity quality > response quality
- trust is part of utility
- proactivity is required
- one correct intervention > twenty plausible suggestions
- small adaptive wins > giant demos
- self-improvement must happen through observed outcomes, not fantasy self-critique
- build from proven working components before inventing net-new science
design principles
product
- start with one painful workflow, not category breadth
- optimize for approval-ready action, not insight theater
- reduce maintenance overhead aggressively
- earn more agency through repeated small wins
system
- prefer proven architectures over frontier novelty by default
- use existing strong harnesses and patterns as building blocks
- memory should be inspectable, layered, and promotable
- orchestration should be policy-aware, not just state-aware
- permissions and blast radius shape architecture from day 1
operating
- doctrine is slow; bets are fast
- evidence outranks eloquence
- every bet needs a next test and a kill condition
- raw live channels do not update doctrine directly
self-improvement loop
the system should improve by closing loops on real outcomes:
- propose a bounded intervention
- route through the right approval boundary
- observe what the user accepted, rejected, or edited
- write back preference, outcome, and confidence signals
- adjust future action selection, not just future wording
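the loop above can be sketched as a small outcome log that shifts future action selection. this is a minimal illustration, assuming a per-action-class acceptance signal updated by exponential moving average; the class names, weights, and threshold are hypothetical, not a spec.

```python
from dataclasses import dataclass, field
from enum import Enum

class Verdict(Enum):
    ACCEPTED = "accepted"
    REJECTED = "rejected"
    EDITED = "edited"

@dataclass
class Intervention:
    action_class: str  # e.g. "draft_reply", "schedule_followup" (hypothetical)
    proposal: str

@dataclass
class OutcomeLog:
    # per action class: rolling acceptance signal in [0, 1], prior 0.5
    confidence: dict = field(default_factory=dict)

    def record(self, intervention: Intervention, verdict: Verdict) -> None:
        prior = self.confidence.get(intervention.action_class, 0.5)
        signal = {Verdict.ACCEPTED: 1.0, Verdict.EDITED: 0.5,
                  Verdict.REJECTED: 0.0}[verdict]
        # exponential moving average: outcomes adjust future action
        # selection, not just future wording
        self.confidence[intervention.action_class] = 0.8 * prior + 0.2 * signal

    def should_propose(self, action_class: str, threshold: float = 0.3) -> bool:
        return self.confidence.get(action_class, 0.5) >= threshold

log = OutcomeLog()
log.record(Intervention("draft_reply", "draft a reply to the open thread"),
           Verdict.ACCEPTED)
```

note the asymmetry: acceptance raises confidence for the whole action class, so repeated small wins earn broader proposing, while repeated rejection quiets a class rather than rewording it.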
trust and agency ladder
day 1
- ingest
- summarize
- prepare drafts
- recommend next actions
after proof
- schedule or execute reversible low-risk actions with approval
- operate on scoped channels with strong audit trail
after deep trust
- higher-frequency delegation
- broader channel coverage
- narrower approval loops for pre-cleared action classes
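the ladder above behaves like a policy gate: each tier inherits everything earned below it, and unknown actions are denied by default. a minimal sketch, assuming hypothetical tier and capability names:

```python
from enum import Enum, auto

class TrustTier(Enum):
    DAY_ONE = auto()
    AFTER_PROOF = auto()
    DEEP_TRUST = auto()

# each tier lists only the capabilities it newly unlocks;
# higher tiers inherit everything earned earlier
CAPABILITIES = {
    TrustTier.DAY_ONE: {"ingest", "summarize", "draft", "recommend"},
    TrustTier.AFTER_PROOF: {"execute_reversible_with_approval",
                            "scoped_channel_ops"},
    TrustTier.DEEP_TRUST: {"pre_cleared_action_classes",
                           "broad_channel_coverage"},
}

def allowed(tier: TrustTier, action: str) -> bool:
    # find the tier where the action is first unlocked; the caller's
    # tier must be at or above it. unknown actions are denied.
    for t in TrustTier:
        if action in CAPABILITIES[t]:
            return tier.value >= t.value
    return False
```

deny-by-default is the load-bearing choice here: it encodes "autonomy without clear permissions" as a non-goal directly in the policy shape.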
business model posture
the business model should follow the trust ladder:
- low-friction entry through obvious proof of power
- paid layer for security, reliability, support, and managed setup
- increasing value through deeper continuity, not feature count
weak point:
- pricing evidence is still weak
proven harness map
| harness or source | why it matters | current posture |
|---|---|---|
| arcgentica / symbolic-ai style orchestrator harness | architecture can massively outperform naked model use on hard tasks | inspiration + proof trigger; replicate locally |
| supermemory-like memory layer | points toward inspectable, retrieval-ready, memory-native systems | candidate building block |
| opencode / codex-style operator loop | proves real leverage from tool-using agents in bounded environments | useful substrate, not the full product |
| secondme source-ingestion and promotion pipeline | turns live chat into inspectable memory instead of doctrine noise | core internal harness |
inspiration map
| source | strongest contribution |
|---|---|
| egor rudi conversation | buyer truth, time to first insight, quick wins, situational awareness, capability briefing |
| daniel miessler family | category language, scaffolding > model, personal ai infrastructure |
| mitchell levin pack | control logic, active memory, setpoints, perturbation-first evaluation |
active bets
- context management is the core product, not one feature among many
- the first wow is time to first insight plus initiated leverage, not briefing alone
- the wedge is complexity x stakes x agency, not hnw alone
- trust requires a ladder, not one deployment mode
- harness quality matters more than raw model delta for this workflow
non-goals
- being a general research lab
- building a generic multi-agent platform before one sharp workflow wins
- chasing model novelty as a strategy
- autonomy without clear permissions
- category language inflation without new field evidence
doctrine update rule
this document should change only when repeated field evidence, promoted sources, or frozen workflow decisions actually change the map.