Worked with more than 100 people

CORE COMPETENCIES & SKILLS

Annotation & Evaluation

• Image annotation: bounding boxes, segmentation, keypoints, and object classification for training datasets.
• Text annotation: classification, entity tagging, sentiment labeling, and intent mapping for NLP tasks.
• Audio/Video labeling: transcription review, timestamping, and speaker cues for multimodal AI training.
• Model evaluation: human-in-the-loop assessments, scoring outputs against rubrics and gold-standard references.

AI Training Data Operations

• Dataset readiness: validate labeling completeness, schema consistency, and format requirements (CSV/JSON).
• Prompt & response evaluation: review AI outputs for correctness, relevance, clarity, and safe behavior.
• Error pattern analysis: identify recurring failure modes and recommend guideline improvements or relabeling.
• Production discipline: maintain documentation, change logs, and clean handoff notes for scalable AI workflows.

Analytical & Quality Assurance

• Guideline adherence: proven ability to interpret and execute long, complex annotation instructions.
• Edge-case judgment: consistent, defensible decisions on ambiguous cases.
• QA & self-audit: systematic review process to detect and correct errors pre-submission.
• Metrics-driven: focus on accuracy, consistency, throughput, and inter-annotator agreement.

HIGHLIGHTED PROJECT EXPERIENCE

Personal Project — Image Annotation for E-commerce | Jan 2025

• Performed bounding-box annotation on 200 fashion product images using Labelbox-style workflows, ensuring clean labels for training-ready computer vision datasets.
• Wrote a mini-guideline defining label classes, occlusion handling, and size thresholds, including edge-case examples to improve annotation consistency.
• Self-audited the dataset and achieved 99.5% internal accuracy; reduced ambiguous labels by 78% after rule clarifications and QA rechecks.

Text Classification — News Sentiment & Topic Tagging (Personal QA) | Mar 2024

• Labeled 1,000+ Indonesian-English news snippets for sentiment (pos/neg/neutral) and topic taxonomy, prioritizing clarity and consistent labeling decisions.
• Defined edge-case rules for sarcasm, mixed-topic signals, and misleading framing, improving inter-annotator agreement from 0.62 to 0.88.
• Added safety-focused checks to flag harmful, biased, or toxic language patterns, strengthening dataset reliability for AI training and evaluation.

FREELANCE EXPERIENCE

Freelance Microtasks (Upwork / Microtask Platforms) — Ongoing

• Completed moderation, relevance labeling, and classification HITs with consistent accuracy and reliable on-time delivery.
• Followed task rubrics strictly, resolving ambiguous cases with consistent judgment and clean submission formatting.
• Maintained steady productivity across repetitive workflows while keeping quality stable under tight time constraints.

Freelance Data Annotator & Quality Rater — Remote | 2024–Present

• Delivered high-quality labels across image, text, and audio tasks using strict guideline compliance and consistency checks.
• Communicated unclear instructions to project leads and applied practical resolutions to reduce repeated labeling errors.
• Performed batch QA reviews and produced short QA notes documenting common issues, fixes, and prevention rules.

AI Evaluator (Microtasks) — Remote | 2024–Present

• Evaluated model outputs using scoring rubrics, rating correctness, relevance, completeness, clarity, and usefulness of responses.
• Flagged unsafe, biased, misleading, or policy-sensitive outputs and wrote concise evaluator notes to support model alignment.
• Identified recurring failure patterns (hallucinations, instruction misses, tone issues) and suggested improved edge-case training examples.

SELECTED ACHIEVEMENTS & METRICS

• Personal dataset annotation: 200+ images (bounding boxes) with documented guideline and 99.5% self-audited accuracy.
• Improved inter-annotator agreement in a pilot text project from 0.62 → 0.88 after clarifying edge-case rules.
• Steady client satisfaction on freelance platforms (average microtask rating of 4.8/5; details available on request).

EDUCATION

Bachelor of Science, Industrial Engineering — Indonesia Computer University | 2011

LANGUAGE & COMMUNICATION

• English: Advanced reading/writing for technical guidelines and QA notes.
• Bahasa Indonesia: Native speaker; strong cultural/contextual knowledge for local content annotation.

AVAILABILITY & WORKING STYLE

• Remote, flexible hours (comfortable with fixed-shift and asynchronous workflows).
• Strong timezone discipline, daily progress reporting, and commitment to NDAs and data confidentiality.
