Tigerfish Data Labeling. The human touch.

In business since 1989, we’ve learned a few things. Accuracy. Meeting a deadline. How to decipher and optimize complex projects. 

Here’s our secret

There’s no substitute for experience. Everything we know we’ve learned on the job. Assembling the team. Assigning the tasks. Delivering results.

Put our experience to work for you. You’ll have the benefit of a personal project manager, data labelers with expertise in the field, and a customizable platform.

Tell us what you need and we’ll get it right.

You guys rock! Thank you!

SS, Apple.com

I love Tigerfish.

AC, Vice President, BlackRock

Resourceful, dependable, intelligent, friendly.

What's at the heart of Tigerfish? A team of brilliant, task-oriented, friendly people devoted to careful listening, open communication, and solving problems creatively.

Adam Goldberg
Founder & President

Lydia Chen
Director of Production

Nik So
Director of New Technology

Chi Le
Director of Operations

The Tigerfish Story
At first, Tigerfish was a one-person operation...

The Neighborhood

John Coltrane and Thelonious Monk played around the corner. Joe DiMaggio married Marilyn Monroe in the local cathedral. Francis Ford Coppola edited The Godfather on our street. Lenny Bruce performed at the Purple Onion. Allen Ginsberg and Bob Dylan hung out at City Lights Books. Ellen DeGeneres got her start here. Robin Williams worked new material a few blocks away. Humphrey Bogart, Lauren Bacall, Alfred Hitchcock, Jimmy Stewart, Clint Eastwood, and Woody Allen filmed here.

And for over thirty years, San Francisco's North Beach has been home to Tigerfish.

Call us up. There are some people we’d like you to meet.

I have come to expect excellent service and a great value from Tigerfish over the years. I’m pleased with the ease of use and the quick turnaround. Overall, I remain really pleased.

MB
Hass Jr. Fund

Tigerfish. The word on time.™

Attention to detail, rock-solid infrastructure, exceptional reliability -- that's the vision that guided Adam Goldberg when he started Tigerfish thirty-five years ago. Since then, the company's growth has embodied Adam's principle that people excel in a friendly, supportive environment. After all, the secret to great work is great teamwork.

Trust Tigerfish for

  • Expert Annotators
  • Collaborative Guideline Development
  • Strict Data Security & Confidentiality
  • Multi-Layered Quality Control
  • Unwavering Attention to Detail
  • Cutting-Edge Methodology
  • Customized Templates
  • Statistical Validation

With over three decades in business, Tigerfish is on it.
Friendly service, precise labeling, and the right people for the job.

Alignment with reality is your stock in trade. Make sure you get it right.

Trust Tigerfish for

  • Claim Validation. We interpret and assess information from multiple sources to capture nuance and resolve conflicting information.
  • Structured Verification Protocols. We check citations and flag unsupported or contradicted claims.
  • Clear Reporting. We highlight specific factual errors, missing support, or contradictory evidence (a sample report record is sketched after this list).
  • Meticulous Verification. Get the benefit of 35 years of fine-tuning.
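
To make the "Clear Reporting" item above concrete, here is a minimal, illustrative sketch of the kind of structured record a verification pass might produce. The field names, categories, and example content are our own illustration, not any client's actual reporting schema.

    # Illustrative sketch only: one possible structure for a claim-verification
    # record. Field names and categories are invented for this example.
    from dataclasses import dataclass, field
    from enum import Enum

    class Verdict(Enum):
        SUPPORTED = "supported"        # backed by at least one checked source
        UNSUPPORTED = "unsupported"    # no citation or evidence found
        CONTRADICTED = "contradicted"  # checked evidence conflicts with the claim

    @dataclass
    class ClaimReport:
        claim_text: str                # the claim as it appears in the material
        verdict: Verdict               # overall assessment
        citations_checked: list[str] = field(default_factory=list)  # sources reviewed
        notes: str = ""                # specific errors, missing support, or conflicts

    # Example record an annotator might file:
    report = ClaimReport(
        claim_text="The company was founded in 1989.",
        verdict=Verdict.SUPPORTED,
        citations_checked=["company history page"],
        notes="Founding year confirmed by the cited page.",
    )
    print(report.verdict.value)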

"Everything went incredibly well. Communication with your office was effortless. Thanks for everything!

GM
Camp Creative

Get the tone right. Everything else will follow. 

Trust Tigerfish for

  • Expertise in Linguistic Nuance. We understand style and persona.
  • Creative Task Design. Clear and comprehensive instructions guide Tigerfish annotators. 
  • Brand Alignment. Understanding brand and voice enables us to create actionable criteria.
  • Feedback Cycles. Iterative feedback allows us to refine model attributes.
  • Attribute-Specific Rater Pools. We select and train evaluators with specific expertise (humor, specific cultural contexts, etc.); an illustrative roll-up of such ratings is sketched after this list.
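
As a purely illustrative example of how an attribute-specific rater pool feeds a feedback cycle, the short sketch below averages per-attribute scores from a few specialized raters in one round. The attribute names, raters, and scores are invented for illustration.

    # Illustrative sketch only: averaging attribute-specific ratings from a
    # specialized rater pool for one feedback round. All values are invented.
    from statistics import mean

    ratings = [
        {"rater": "humor_specialist_1", "humor": 4, "brand_voice": 5},
        {"rater": "humor_specialist_2", "humor": 3, "brand_voice": 4},
        {"rater": "humor_specialist_3", "humor": 4, "brand_voice": 4},
    ]

    for attribute in ("humor", "brand_voice"):
        scores = [r[attribute] for r in ratings]
        print(f"{attribute}: mean {mean(scores):.2f} across {len(scores)} raters")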

Amazing work! I'm impressed with your research and thoroughness.

JL
Google

A useful helper is honest, harmless, and concise. 

Trust Tigerfish for

  • Superior Guideline Interpretation. Tigerfish evaluators apply nuanced guidelines, leading to reliable preference signals.
  • Inter-Annotator Agreement (IAA). Accurate calibration and robust quality control result in exceptional IAA scores (one common IAA measure is sketched after this list).
  • Nuance Capture Expertise. Subtle differences in tone, style, safety, and helpfulness are critical for high-fidelity reward models.
  • Diverse Perspective Integration. Diverse viewpoints help mitigate subjective bias.
  • Efficient & Scalable Workflows. We efficiently manage large-scale preference data collection without sacrificing quality or consistency.
  • Clear Rationale Capture. Tigerfish annotators document their reasoning for analysis and model understanding.
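
One widely used IAA measure is Cohen's kappa, which corrects raw agreement between two raters for agreement expected by chance. The sketch below is illustrative only: the labels are invented, and the function is a generic textbook implementation rather than our production tooling.

    # Illustrative sketch only: Cohen's kappa for two raters assigning
    # categorical labels. The labels below are invented, not client data.
    from collections import Counter

    def cohen_kappa(labels_a, labels_b):
        """Observed agreement corrected for chance agreement."""
        assert labels_a and len(labels_a) == len(labels_b)
        n = len(labels_a)
        observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
        if expected == 1.0:  # degenerate case: both raters used one identical label
            return 1.0
        return (observed - expected) / (1 - expected)

    rater_1 = ["helpful", "helpful", "unsafe", "helpful", "unclear"]
    rater_2 = ["helpful", "unclear", "unsafe", "helpful", "unclear"]
    print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")  # 0.69 for this toy data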

Everything went well (as usual!). Thanks for the great work.

JR
Wells Fargo Funds Management

Diverse, accurate, meticulously crafted prompts make all the difference. 

Trust Tigerfish for

  • Creative Prompt Generation. A wide range of prompts – simple to complex, covering diverse topics and instruction types – is crucial for robust SFT.
  • Response Crafting. Clear, unambiguous, accurate "ideal responses" serve as high-quality exemplars (a sample data format is sketched after this list).
  • Domain-Specific Knowledge Application. Tigerfish writers create contextually appropriate instruction pairs for specialized domains.
  • Scalability for Large Datasets. Expert project management and solid infrastructure lead to high-quality SFT datasets.
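
As a concrete, purely illustrative example of the data format, instruction/response pairs for SFT are often stored as JSON Lines, one example per line. The field names and content below are our own illustration, not any client's actual dataset.

    # Illustrative sketch only: writing instruction/response pairs for SFT
    # as JSON Lines. Field names and content are invented for this example.
    import json

    sft_examples = [
        {
            "instruction": "Summarize the key risks in the attached audit report.",
            "response": "The report identifies three key risks: ...",
            "domain": "finance",
        },
        {
            "instruction": "Rewrite this paragraph in a friendly, informal tone.",
            "response": "Here's a warmer take on your paragraph: ...",
            "domain": "style",
        },
    ]

    with open("sft_dataset.jsonl", "w", encoding="utf-8") as f:
        for example in sft_examples:
            f.write(json.dumps(example, ensure_ascii=False) + "\n")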

Once again you have really pulled through. I heard that you are the best and I can see for myself you are.

CC
Cisco

Deliver results in style. 

Trust Tigerfish for

  • Deep Analysis. We identify logical fallacies, factual inaccuracies, and stylistic inconsistencies.
  • Clear & Actionable Feedback. Tigerfish evaluators provide a narrative analysis of model strengths and vulnerabilities. 
  • Subject Matter Expertise. Annotators with relevant domain knowledge provide context-aware and technically accurate critiques.
  • Constructive Rewriting & Editing. When called for, Tigerfish editors suggest helpful rewrites.
  • Consistent Application. Critiques are applied consistently against established rubrics and standards.

From start to finish the process is seamless and that sure makes my life a lot easier!

HT
Intel

If you can't customize it, you won't get what you need.

Trust Tigerfish for

  • Flexibility & Adaptability. Our "built-from-scratch" experience and operational agility allow us to adapt to varied data types.
  • Collaborative Design. We design effective annotation tasks, guidelines, and workflows.
  • Prototyping. New designs are tested on a small scale to gather feedback before full-scale deployment.
  • Cross-Functional Expertise. Tigerfish brings together experts in linguistics, AI ethics, project management, and specific subject matters.

"WOW…now that is some speedy service! Yep, that’s what we love so much about you guys. You take good care of us.

KT
Intel

Fuzziness won't cut it. You need concrete suggestions to fine-tune your model.

Trust Tigerfish for

  • Strong Writing & Editing. Tigerfish writers know their craft.
  • Constructive Feedback. We provide clear, actionable suggestions.
  • Output Alignment. Rewrites and suggestions align with your goals for tone, style, and quality.
  • Enhanced Communication and Accuracy. Our processes help fine-tune your model. 

Tremendous work, and excellent customer service. It's a pleasure to work with you!

KG
Intel

Audited. Verified. Triple-checked for accuracy.

"Everything went incredibly well. Communication with your office was effortless. Thanks for everything!

GM
Camp Creative

Why Tigerfish?

You have a winning AI engine. Keep that engine humming with the right tools and precise calibration. Model alignment, performance, data quality. Tigerfish makes it all work.

Case Studies

  • Secrets of RLHF in Large Language Models Part II: Reward Modeling

    Binghai Wang, et al.

    Reinforcement Learning from Human Feedback (RLHF) has become a crucial technology for aligning language models with human values and intentions, enabling models to produce more helpful and harmless responses. Reward models are trained as proxies for human preferences to drive reinforcement learning optimization. 

    Read more

  • MaxMin-RLHF: Towards Equitable Alignment of Large Language Models with Diverse Human Preferences

    Souradip Chakraborty, et al.

    Reinforcement Learning from Human Feedback (RLHF) aligns language models to human preferences by employing a singular reward model derived from preference data. However, such an approach overlooks the rich diversity of human preferences inherent in data collected from multiple users.

    Read more

  • ReaLHF: Optimized RLHF Training for Large Language Models through Parameter Reallocation

    Zhiyu Mei, et al.

    Reinforcement Learning from Human Feedback (RLHF) stands as a pivotal technique in empowering large language model (LLM) applications. Since RLHF involves diverse computational workloads and intricate dependencies among multiple LLMs, directly adopting parallelization techniques from supervised training can result in sub-optimal performance.

    Read more

  • RRHF: Rank Responses to Align Language Models with Human Feedback without tears

    Zheng Yuan, et al.

    Reinforcement Learning from Human Feedback (RLHF) facilitates the alignment of large language models with human preferences, significantly enhancing the quality of interactions between humans and models. InstructGPT implements RLHF through several stages, including Supervised Fine-Tuning (SFT), reward model training, and Proximal Policy Optimization (PPO).

    Read more

  • Teaching Large Language Models to Reason with Reinforcement Learning

    Alex Havrilla, et al.

    Reinforcement Learning from Human Feedback (RLHF) has emerged as a dominant approach for aligning LLM outputs with human preferences. Inspired by the success of RLHF, we study the performance of multiple algorithms that learn from feedback (Expert Iteration, Proximal Policy Optimization, Return-Conditioned RL) on improving LLM reasoning capabilities.

    Read more

  • More RLHF, More Trust? On The Impact of Human Preference Alignment On Language Model Trustworthiness

    Aaron J. Li, et al.

    The surge in Large Language Models (LLMs) development has led to improved performance on cognitive tasks as well as an urgent need to align these models with human values in order to safely exploit their power.

    Read more

  • GenderAlign: An Alignment Dataset for Mitigating Gender Bias in Large Language Models

    Tao Zhang, et al.

    Large Language Models (LLMs) are prone to generating content that exhibits gender biases, raising significant ethical concerns. Alignment, the process of fine-tuning LLMs to better align with desired behaviors, is recognized as an effective approach to mitigate gender biases. 

    Read more

  • OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework

    Jian Hu, et al.

    OpenRLHF, an open-source framework, enables efficient RLHF scaling for large language models by solving four-model coordination challenges. Its redesigned scheduling optimizes resources. OpenRLHF provides user-friendly RLHF, DPO, and other alignment techniques, advancing state-of-the-art LLM development.

    Read more

  • Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF

    Dylan J. Foster, et al.

    Reinforcement learning from human feedback (RLHF) has emerged as a central tool for language model alignment. We consider online exploration in RLHF, which exploits interactive access to human or AI feedback by deliberately encouraging the model to produce diverse, maximally informative responses.

    Read more

  • Active Preference Optimization for Sample Efficient RLHF

    Nirjhar Das, et al.

    Reinforcement Learning from Human Feedback (RLHF) is pivotal in aligning Large Language Models (LLMs) with human preferences. Although aligned generative models have shown remarkable abilities in various tasks, their reliance on high-quality human preference data creates a costly bottleneck in the practical application of RLHF.

    Read more

You guys were terrific. We were very happy with the customer service, the quick turnaround and the quality of the transcription.

AL
The California Wellness Foundation

A solid reputation, built on a commitment to excellence.

From earth science to space exploration, from historical archive to economic forecast, from print to air, courtroom to boardroom, Tigerfish has earned the trust of those who need it right the first time.

Our esteemed clients include:

Can't tell you how much we have enjoyed using Tigerfish. Each time I've called you in a pinch I've been able to talk to a real voice. That's huge.

BO
Presentation Strategies

Getting Started

We'd love to hear from you.

Please fill out the information below and we'll be in touch.

I am interested in the following services*


Careers

Tigerfish is currently hiring for the following positions:

Data labeler

Intelligent writing, lexicographical expertise, first-rate research -- if this is you, we'd like to hear from you.

For more than thirty years, Tigerfish has built a successful business by working with freelancers on the basis of mutual respect, good pay, and flexible hours. 

If you have an advanced degree and a talent for evaluation, join Tigerfish as a data labeler! 

To apply, begin here.