Find the best remote jobs.
Reviews and evaluates AI-generated financial models, valuations, and deal structures to provide feedback on banking outputs.
Evaluates AI model performance on history problems, creates original problem sets, and provides feedback to improve AI training data quality.
Evaluates AI-generated mathematics content, provides feedback to research teams, and develops evaluation rubrics for AI model improvement.
Labels, rates, and annotates AI model outputs to train and improve machine learning systems.
Senior consultant designs and builds reinforcement learning tasks and evaluation frameworks for AI model post-training at an enterprise AI platform.
Senior consultant designs and builds RLHF environments, tasks, and evaluation frameworks to support AI model post-training and refinement.
Authors and reviews technical tasks, creates problem sets, and collaborates with AI research teams to train and evaluate AI models.
Evaluates and rates AI model outputs for code generation tools like Codex and Claude, providing feedback to improve model performance.
Annotates and labels data in Italian to train and improve AI models and machine learning systems.
Labels and annotates data to train machine learning models, providing quality feedback on datasets for AI development.
Labels and annotates data in Spanish to train and improve AI/ML models.
Labels and annotates data in Italian to train and improve AI models and machine learning systems.
Labels and annotates data to train AI models, providing feedback and quality control on datasets for machine learning applications.
Annotates and labels data in Spanish to train and improve AI models, providing quality feedback on datasets.
Labels, annotates, and evaluates data to improve machine learning models through remote task-based work and testing activities.
Labels data, annotates content, and evaluates ML model outputs to improve AI systems through feedback and quality assessment.
Labels, annotates, and evaluates data to improve machine learning models while providing feedback on AI system performance.
Evaluates and improves AI models by testing outputs and providing feedback on mathematics-related tasks for leading tech companies.
Physics expert who trains AI models and provides feedback on physics-related outputs, applying Python expertise.
Reviews and evaluates AI-generated outputs, annotates/labels data, and collaborates with research teams to refine evaluation frameworks for insurance AI applications.