
Assistant Entry UX

Voice assistant entry points and UI patterns

Entry Patterns

1. Manual Entry (Host App Controlled)

The user taps an explicit support button:

// In your activity/fragment
helpButton.setOnClickListener {
  AI.startAssistant()
}

Requirements:

  • RECORD_AUDIO permission granted
  • SDK initialized
  • User has been set (optional but recommended)
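
The requirements above can be expressed as a pre-flight check. Note that `EntryState` and `canStartAssistant` are hypothetical host-app helpers for illustration, not part of the SDK surface:

```kotlin
// Hypothetical host-app pre-flight check (not SDK API): gate the
// AI.startAssistant() call on the requirements listed above.
data class EntryState(
    val sdkInitialized: Boolean,
    val micPermissionGranted: Boolean,
    val userSet: Boolean, // optional but recommended
)

// Only the first two requirements are hard blockers.
fun canStartAssistant(state: EntryState): Boolean =
    state.sdkInitialized && state.micPermissionGranted
```

In an Activity you would typically populate `micPermissionGranted` from `ContextCompat.checkSelfPermission(...)` before invoking `AI.startAssistant()`.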

2. Passive/Triggered Entry (SDK Controlled)

The SDK evaluates triggers and, when one fires, shows a confirmation prompt:

// In Application.onCreate
AI.init(
  ...,
  policy = RuntimePolicy(
    triggerStartMode = TriggerStartMode.CONFIRM_UI,
    confirmUiText = ConfirmUiText(
      title = "Need help completing this step?",
      startCta = "Start",
      dismissCta = "Not now"
    )
  )
)

Flow:

  1. SDK detects struggle (errors, time, idle)
  2. Backend evaluates trigger policy
  3. SDK shows confirmation dialog
  4. User confirms → voice session starts

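Step 1's struggle detection can be pictured with a toy heuristic. The `Signals` type and the thresholds below are invented for illustration; the SDK's actual detector and the backend policy evaluation (step 2) are internal:

```kotlin
// Toy struggle heuristic (illustrative only; the SDK's detector is internal).
data class Signals(val errorCount: Int, val secondsOnScreen: Int, val secondsIdle: Int)

// Fires when any one signal crosses its (hypothetical) threshold:
// repeated errors, a long time on the screen, or a long idle period.
fun looksStuck(s: Signals): Boolean =
    s.errorCount >= 3 || s.secondsOnScreen >= 120 || s.secondsIdle >= 30
```
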
3. Immediate Entry

For urgent help scenarios:

AI.init(
  ...,
  policy = RuntimePolicy(
    triggerStartMode = TriggerStartMode.IMMEDIATE
  )
)

No confirmation UI — assistant starts directly when triggered.

Voice Session Listener

Register a listener to receive LiveKit connection credentials:

AI.setVoiceSessionListener { result ->
  if (result.isValid) {
    // Connect to LiveKit
    connectToLiveKit(
      url = result.livekitUrl,
      token = result.token,
      room = result.livekitRoom
    )
  } else {
    // Handle error
    Log.e("KYCIS", "Voice session failed: ${result.error}")
  }
}
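
A minimal `connectToLiveKit` sketch, assuming the official LiveKit Android SDK (`io.livekit.android:livekit-android`); exact options and error handling will depend on your setup:

```kotlin
import android.content.Context
import io.livekit.android.LiveKit
import io.livekit.android.room.Room

// Sketch only: create a Room and join it with the credentials handed to
// the VoiceSessionListener. Room.connect() is a suspend function, so call
// it from a coroutine scope you control (e.g. lifecycleScope).
suspend fun connectToLiveKit(context: Context, url: String, token: String): Room {
    val room = LiveKit.create(context.applicationContext)
    room.connect(url, token)
    return room
}
```

The `livekitRoom` name in the result is usually already encoded in the token's grants, so it is mainly informational here.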

Confirmation UI

SDK-owned overlay (not host-app UI):

ConfirmUiText(
  title = "Need help?",
  message = "Our assistant can guide you",
  startCta = "Talk now",
  dismissCta = "Maybe later"
)

Dismissal behavior:

  • Resets passive tracker via onInteraction()
  • Prevents immediate re-trigger
  • Respects cooldown settings
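
The cooldown behavior can be pictured with a small tracker. This class and its window are hypothetical; the real cooldown is enforced by the SDK and its backend policy:

```kotlin
// Hypothetical dismissal tracker (illustrative; the SDK enforces its own cooldown).
class DismissalTracker(private val cooldownMillis: Long) {
    private var lastDismissedAt = Long.MIN_VALUE / 2 // "never dismissed"

    fun onDismissed(nowMillis: Long) {
        lastDismissedAt = nowMillis
    }

    // A new confirmation prompt is allowed only after the cooldown elapses.
    fun mayReTrigger(nowMillis: Long): Boolean =
        nowMillis - lastDismissedAt >= cooldownMillis
}
```
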

FAB / Floating Button Patterns

Two optional voice FAB designs are provided:

  • BasicVoiceFabDesign — simple microphone icon
  • PremiumVoiceFabDesign — animated, branded

Integration in Compose:

VoiceFabHost(
  design = PremiumVoiceFabDesign(),
  onClick = { AI.startAssistant() }
)

Permission Flow

The SDK handles the microphone permission request itself:

  1. Before startAssistant(), SDK checks RECORD_AUDIO
  2. If needed, requests permission:
    • Uses PermissionRequestFragment on FragmentActivity
    • Falls back to ActivityCompat.requestPermissions
  3. Result forwarded via AI.onRequestPermissionsResult()
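
Step 3 matters when the `ActivityCompat` fallback is used: the host must forward the result. A sketch, assuming `AI.onRequestPermissionsResult` mirrors the standard Android callback signature:

```kotlin
import android.app.Activity

class HelpActivity : Activity() {
    override fun onRequestPermissionsResult(
        requestCode: Int,
        permissions: Array<out String>,
        grantResults: IntArray
    ) {
        super.onRequestPermissionsResult(requestCode, permissions, grantResults)
        // Forward to the SDK so a pending startAssistant() can resume
        // once RECORD_AUDIO is granted.
        AI.onRequestPermissionsResult(requestCode, permissions, grantResults)
    }
}
```

On a `FragmentActivity`, the SDK's `PermissionRequestFragment` path receives the result directly and no forwarding is needed.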

Best Practices

  1. Always provide manual entry — don't rely solely on triggers
  2. Clear button positioning — FAB or toolbar, always visible
  3. Handle permission denial gracefully — explain why mic is needed
  4. Stop on user end — call AI.stopAssistant() when user leaves
  5. Respect dismissals — don't immediately re-prompt

Session Lifecycle

  1. User taps help / a trigger fires
  2. AI.startAssistant()
  3. SDK requests RECORD_AUDIO permission if needed
  4. POST /v1/assistant/session/start
  5. VoiceSessionListener receives LiveKit credentials
  6. App connects to LiveKit
  7. User talks with the agent
  8. User ends the call / app calls AI.stopAssistant()
  9. POST /v1/assistant/session/stop
  10. Reengagement activity emitted
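
The lifecycle above can be sketched as a tiny state machine. The states and transition rules here are an illustration, not SDK types:

```kotlin
// Illustrative lifecycle states (not SDK types).
enum class SessionState { IDLE, STARTING, CONNECTED, STOPPING }

// Legal transitions, mirroring the flow above; out-of-order events are ignored.
fun next(state: SessionState, event: String): SessionState = when (state to event) {
    SessionState.IDLE to "start" -> SessionState.STARTING            // startAssistant()
    SessionState.STARTING to "credentials" -> SessionState.CONNECTED // listener fired, LiveKit joined
    SessionState.CONNECTED to "stop" -> SessionState.STOPPING        // stopAssistant() / user ends call
    SessionState.STOPPING to "stopped" -> SessionState.IDLE          // session/stop acknowledged
    else -> state
}
```
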

Troubleshooting

  Issue               | Cause              | Fix
  --------------------|--------------------|---------------------------------
  FAB not visible     | Missing permission | Add RECORD_AUDIO to the manifest
  Cannot start voice  | Backend not ready  | Check the /health endpoint
  No trigger popup    | Trigger disabled   | Check backend feature flags
  Immediate dismissal | Cooldown active    | Wait out the cooldown period
