Claude-3.5-Sonnet
Anthropic's most powerful model (using the latest model snapshot as of October 22, 2024). Excels at complex tasks like coding, writing, analysis, and vision. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Claude-3.5-Sonnet-200k. The compute points value is subject to change.
GPT-4o
OpenAI's most powerful model, GPT-4o, using the latest model snapshot from November 2024. It delivers more natural, engaging, and tailored writing, and provides more thorough and insightful responses overall. Stronger than GPT-3.5 on quantitative questions (math and physics), creative writing, and many other challenging tasks. The context window has been shortened to optimize performance -- for longer context messages, please use GPT-4o-128k.
Grok-beta
Grok-beta is an early preview of xAI's most intelligent language model. It has state-of-the-art coding, reasoning, and question-answering capabilities and excels at complex, multi-step tasks. As part of its integration with Poe, Grok-beta does not have access to real-time information from X or the internet. For longer context messages, please try Grok-beta-128k. The compute points value is subject to change.
o1-mini
A smaller version of OpenAI's o1-preview model, designed to spend more time thinking before it responds while offering a better performance profile. It can reason through complex tasks and solve harder problems than previous models in science, coding, and math. Supports 128k tokens of context.
Llama-3.1-405B
The flagship of Meta's Llama 3.1 family, this open-source language model excels at multilingual dialogue and outperforms many closed- and open-source conversational AI systems on common industry benchmarks. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Llama-3.1-405B-FW-128k. The compute points value is subject to change.
Gemini-1.5-Pro
The multimodal model from Google's Gemini family that balances performance and speed, powered by gemini-1.5-pro-002. The model accepts text, image, and video input from the entire conversation and provides text output, with a restriction of one video per message. Ideal for uploading small video files under 20 seconds. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Gemini-1.5-Pro-128k and Gemini-1.5-Pro-2M. The compute points value is subject to change.
Claude-3.5-Haiku
The latest generation of Anthropic's fastest model. Claude 3.5 Haiku has fast speeds, improved instruction following, and more accurate tool use. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Claude-3.5-Haiku-200k. The compute points value is subject to change.
Mistral-Medium
Mistral AI's medium-sized model. Supports a context window of 32k tokens (about 24,000 words) and outperforms Mixtral-8x7b and Mistral-7b on benchmarks across the board.
ChatGPT-4o-Latest
OpenAI's most powerful model. A dynamic model continuously updated to the current version of GPT-4o used in ChatGPT. Stronger than ChatGPT on quantitative questions (math and physics), creative writing, and many other challenging tasks. Powered by chatgpt-4o-latest. The context window has been shortened to optimize for speed and cost. For longer context messages, please try ChatGPT-4o-Latest-128k.
Gemini-1.5-Flash
Powered by gemini-1.5-flash-002. This smaller Gemini model is optimized for narrower or high-frequency tasks where the speed of the model's response time matters the most. The model accepts text, image, and video input from the entire conversation and provides text output, with a restriction of one video per message. Ideal for uploading small video files under 20 seconds. Context window has been shortened to optimize for speed and cost. For longer context messages, please try Gemini-1.5-Flash-128k and Gemini-1.5-Flash-1M. The compute points value is subject to change.
GPT-4o-Mini
OpenAI's latest model. This intelligent small model is smarter than GPT-3.5 Turbo, cheaper, and just as fast. The context window has been shortened to optimize for speed and cost. For longer context messages, please try GPT-4o-Mini-128k.
Llama-3.1-70B
The mid-sized member of Meta's Llama 3.1 family, this open-source language model balances intelligence and speed, excels at multilingual dialogue, and outperforms many closed- and open-source conversational AI systems on common industry benchmarks. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Llama-3.1-70B-FW-128k or Llama-3.1-70B-T-128k. The compute points value is subject to change.
Llama-3.1-8B
The smallest and fastest member of Meta's Llama 3.1 family, this open-source language model excels at multilingual dialogue and outperforms many closed- and open-source conversational AI systems on common industry benchmarks. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Llama-3.1-70B-FW-128k or Llama-3.1-70B-T-128k. The compute points value is subject to change.
Llama-3.1-Nemotron
Llama 3.1 Nemotron 70B from Nvidia. Excels in understanding, following instructions, writing and performing coding tasks. Strong reasoning abilities.
f1-preview
f1 enables developers to access the power of compound AI with the simplicity of prompting. As with declarative programming, users can describe what they want to achieve via prompting, without needing to specify exactly how to accomplish it. Read more here: https://fireworks.ai/blog/fireworks-compound-ai-system-f1
o1-preview
An early preview of OpenAI's latest o1 model, which is designed to spend more time thinking before it responds. It can reason through complex tasks and solve harder problems than previous models in science, coding, and math. Supports 128k tokens of context.
f1-mini-preview
f1-mini enables developers to access the power of compound AI with the simplicity of prompting. As with declarative programming, users can describe what they want to achieve via prompting, without needing to specify exactly how to accomplish it. It is a smaller, faster variant of f1. Read more here: https://fireworks.ai/blog/fireworks-compound-ai-system-f1
Imagen3
Google DeepMind's highest quality text-to-image model, capable of generating images with great detail, rich lighting, and few distracting artifacts. To adjust the aspect ratio of your image, add --aspect-ratio (1:1, 16:9, 9:16, 4:3, 3:4). To remove specific visual elements, add --no (elements to remove). For simpler prompts, faster results, and lower cost, use @Imagen3-Fast
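For illustration only (this example is not part of the original listing), an Imagen3 prompt combining these flags might look like: "A lighthouse at dusk --aspect-ratio 16:9 --no people, boats", which requests a widescreen image with people and boats excluded.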
FLUX-pro-1.1
State-of-the-art image generation with top-of-the-line prompt following, visual quality, image detail and output diversity. This is the most powerful version of FLUX 1.1. Use "--aspect" to select an aspect ratio (e.g. --aspect 1:1). Send an image to have this model reimagine/regenerate it via FLUX Redux.
Playground-v3
The latest image model from Playground, with industry-leading ability to understand complex prompts and generate photorealistic images, logos, typography, and more. Allows users to specify elements to avoid in the image using the "--no" parameter at the end of the prompt (e.g. "Tall trees, daylight --no rain"). Optionally, specify an aspect ratio with the "--aspect" parameter (e.g. "Tall trees, daylight --aspect 1:2"). Powered by Playground_v3 from playground.com.
Imagen3-Fast
Google DeepMind’s highest quality text-to-image model, capable of generating images with great detail, rich lighting, and few distracting artifacts — optimized for short, simple prompts. To adjust the aspect ratio of your image add --aspect-ratio (1:1, 16:9, 9:16, 4:3, 3:4). For more complex prompts, use @Imagen3
Ideogram-v2
The latest image model from Ideogram, with industry-leading ability to understand complex prompts and generate photorealistic images, logos, typography, and more. Allows users to specify the aspect ratio of the image using the "--aspect" parameter at the end of the prompt (e.g. "Tall trees, daylight --aspect 9:16"). Valid aspect ratios are 10:16, 16:10, 9:16, 16:9, 3:2, 2:3, 4:3, 3:4, and 1:1. A "--style" parameter can be set to specify the style of the generated image (GENERAL, REALISTIC, DESIGN, RENDER_3D, ANIME). Powered by Ideogram.
StableDiffusion3.5-L
Stability.ai's StableDiffusion3.5 Large, hosted by @fal, is the Stable Diffusion family's most powerful image generation model in terms of both image quality and prompt adherence. Use "--aspect" to select an aspect ratio (e.g. --aspect 1:1).
DALL-E-3
OpenAI's most powerful image generation model. Generates high-quality, richly detailed images based on the user's most recent prompt. Use "--aspect" to select an aspect ratio (e.g. --aspect 1:1). Valid aspect ratios are 1:1, 7:4, and 4:7.
FLUX-pro
State-of-the-art image generation with top-of-the-line prompt following, visual quality, image detail and output diversity. This is the most powerful version of FLUX.1. Use "--aspect" to select an aspect ratio (e.g. --aspect 1:1). Send an image to have this model reimagine/regenerate it via FLUX Redux.
FLUX-schnell
Turbo-speed image generation with strengths in prompt following, visual quality, image detail and output diversity. This is the fastest version of FLUX.1. Use "--aspect" to select an aspect ratio (e.g. --aspect 1:1). Send an image to have this model reimagine/regenerate it via FLUX Redux.
FLUX-dev
High-performance image generation with top-of-the-line prompt following, visual quality, image detail and output diversity. This is a more efficient version of FLUX-pro, balancing quality and speed. Use "--aspect" to select an aspect ratio (e.g. --aspect 1:1). Send an image to have this model reimagine/regenerate it via FLUX Redux.
StableDiffusionXL
Generates high quality images based on the user's most recent prompt. Allows users to specify elements to avoid in the image using the "--no" parameter at the end of the prompt. Select an aspect ratio with "--aspect". (e.g. "Tall trees, daylight --no rain --aspect 7:4"). Valid aspect ratios are 1:1, 7:4, 4:7, 9:7, 7:9, 19:13, 13:19, 12:5, & 5:12. Powered by Stable Diffusion XL.
Llama-3.1-405B-T
Llama 3.1 405B Instruct from Meta. Supports 128k tokens of context. The points price is subject to change.
Ideogram
Excels at creating high-quality images from text prompts. Its excellent prompt adherence and advanced text rendering allow it to handle a wide range of creative requests. Allows users to specify the aspect ratio of the image using the "--aspect" parameter at the end of the prompt (e.g. "Tall trees, daylight --aspect 9:16"). Valid aspect ratios are 10:16, 16:10, 9:16, 16:9, 3:2, 2:3, 4:3, 3:4, and 1:1. Powered by Ideogram.
PlaygroundUpscaler
Upscales images to 4x their resolution while enhancing quality and detail, powered by a Playground AI model.
Playground-v2.5
Generates visually striking images with exceptional color vibrancy and contrast, finer human-related details, and support for multiple aspect ratios. Allows users to specify elements to avoid in the image using the "--no" parameter at the end of the prompt (e.g. "Tall trees, daylight --no rain"). Optionally, specify an aspect ratio with the "--aspect" parameter (e.g. "Tall trees, daylight --aspect 1:2"). Powered by Playground_v2.5 from playground.com.
Pika-1.0
Generate a video using a text prompt, an image, or a video. Pass in optional parameters using flags following your prompt. (e.g. "A cat riding a skateboard --aspect 16:9 --framerate 24") Additional parameters: --aspect (e.g. 16:9, 19:13) --framerate (between 8-24) --motion (motion intensity, between 0-4) --gs (guidance scale/relevance to text, between 8-24) --no (negative prompt, e.g. ugly, scary) --seed (e.g. 12345) Camera control (use one at a time): --zoom ("in" or "out") --pan ("left" or "right") --tilt ("up" or "down") --rotate ("cw" or "ccw") Powered by Pika.
Hailuo-AI
Best-in-class text and image to video model by MiniMax.
LivePortrait
Animates a given portrait with the motion from the provided video. Powered by fal.ai
Llama-3.1-405B-FW-128k
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes. The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
Llama-3.1-8B-T-128k
Llama 3.1 8B Instruct from Meta. Supports 128k tokens of context. The points price is subject to change.
Llama-3.1-70B-FW-128k
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes. The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
Llama-3.1-70B-T-128k
Llama 3.1 70B Instruct from Meta. Supports 128k tokens of context. The points price is subject to change.
Llama-3.1-8B-FW-128k
The Meta Llama 3.1 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction tuned generative models in 8B, 70B and 405B sizes. The Llama 3.1 instruction tuned text only models (8B, 70B, 405B) are optimized for multilingual dialogue use cases and outperform many of the available open source and closed chat models on common industry benchmarks.
Llama-3-70b-Groq
Llama 3 70b powered by the Groq LPU™ Inference Engine.
Gemma-2-27b-T
Gemma 2 27B Instruct from Google. The points price is subject to change.
Claude-3-Sonnet
Anthropic's Claude-3-Sonnet balances intelligence and speed. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Claude-3-Sonnet-200k. The compute points value is subject to change.
Claude-3-Haiku
Anthropic's Claude 3 Haiku outperforms models in its intelligence class on performance, speed, and cost, without requiring specialized fine-tuning. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Claude-3-Haiku-200k. The compute points value is subject to change.
Claude-3-Opus
Anthropic's Claude 3 Opus handles complex analysis, longer multi-step tasks, and higher-order math and coding tasks. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Claude-3-Opus-200k. The compute points value is subject to change.
Gemini-1.5-Flash-128k
Powered by gemini-1.5-flash-002. This smaller Gemini model is optimized for narrower or high-frequency tasks where the speed of the model’s response time matters the most. The model accepts text, image, and video input from the entire conversation and provides text output, with a restriction of one video per message. The compute points value is subject to change.
Gemini-1.5-Pro-128k
The multimodal model from Google's Gemini family that balances performance and speed, powered by gemini-1.5-pro-002. The model accepts text, image, and video input from the entire conversation and provides text output, with a restriction of one video per message. The compute points value is subject to change.
Gemini-1.5-Flash-1M
Powered by gemini-1.5-flash-002. This smaller Gemini model is optimized for narrower or high-frequency tasks where the speed of the model’s response time matters the most. The model accepts text, image, and video input from the entire conversation and provides text output, with a restriction of one video per message. The compute points value is subject to change.
Gemini-1.5-Pro-2M
The multimodal model from Google's Gemini family that balances performance and speed, powered by gemini-1.5-pro-002. The model accepts text, image, and video input from the entire conversation and provides text output, with a restriction of one video per message. The compute points value is subject to change.
GPT-4o-Mini-128k
OpenAI's latest model. This intelligent small model is smarter than GPT-3.5 Turbo, cheaper, and just as fast.
GPT-4o-128k
OpenAI's most powerful model, GPT-4o, using the latest model snapshot from November 2024. It delivers more natural, engaging, and tailored writing, and provides more thorough and insightful responses overall. Stronger than GPT-3.5 on quantitative questions (math and physics), creative writing, and many other challenging tasks.
ChatGPT-4o-Latest-128k
OpenAI's most powerful model. A dynamic model continuously updated to the current version of GPT-4o used in ChatGPT. Stronger than ChatGPT on quantitative questions (math and physics), creative writing, and many other challenging tasks. Powered by chatgpt-4o-latest.
GPT-4-Turbo
Powered by OpenAI's GPT-4 Turbo with Vision. Stronger than GPT-3.5 on quantitative questions, creative writing, and many other challenging tasks. Comparable to GPT-4o on English and coding text, but weaker on non-English text.
Gemini-1.5-Pro-Search
Gemini 1.5 Pro, powered by gemini-1.5-pro-002 and enhanced with Grounding with Google Search, can provide up-to-date information while balancing model performance and speed. Grounding models currently support text only.
Gemini-1.0-Pro
The multi-modal model from Google's Gemini family that balances model performance and speed. Exhibits strong generalist capabilities and excels particularly in cross-modal reasoning. The model accepts text, image, and video input and provides text output. It considers only the images and videos from the latest user message, with a restriction of one video per message.
Gemini-1.5-Flash-Search
Gemini 1.5 Flash, powered by gemini-1.5-flash-002 and enhanced with Grounding with Google Search, can provide up-to-date information while balancing model performance and speed. Grounding models currently support text only.
StableDiffusion3
Understands complex prompts, supports multiple languages, and has improved spelling over SDXL to generate high-quality images. It is especially good at typography and prompt accuracy. Stability AI has partnered with Fireworks AI, the fastest and most reliable API platform in the market, to deliver Stable Diffusion 3 and Stable Diffusion 3 Turbo.
SD3-Turbo
Distilled, few-step version of Stable Diffusion 3, which understands complex prompts, supports multiple languages, and has improved spelling over SDXL to generate high-quality images. It is especially good at typography and prompt accuracy. Stability AI has partnered with Fireworks AI, the fastest and most reliable API platform in the market, to deliver Stable Diffusion 3 and Stable Diffusion 3 Turbo.
StableDiffusion3-2B
Stable Diffusion v3 Medium - by fal.ai
SD3-Medium
2-billion parameter version of Stable Diffusion 3, which understands complex prompts, supports multiple languages, and has improved spelling over SDXL to generate high-quality images. It is especially good at typography and prompt accuracy. Stability AI has partnered with Fireworks AI, the fastest and most reliable API platform in the market, to deliver Stable Diffusion 3 and Stable Diffusion 3 Turbo.
Llama-3-70B-T
Llama 3 70B Instruct from Meta. The points price is subject to change.
Llama-3-70b-Inst-FW
Meta's Llama-3-70B-Instruct hosted by Fireworks AI.
Mixtral8x22b-Inst-FW
Mixtral 8x22B Mixture-of-Experts instruct model from Mistral hosted by Fireworks.
Command-R
I can search the web for up-to-date information and respond in over 10 languages!
Gemma-2-9b-T
Gemma 2 9B Instruct from Google. The points price is subject to change.
Mistral-Large-2
Mistral's latest text generation model (Mistral-Large-2407) has top-tier reasoning capabilities for complex multilingual reasoning tasks, including text understanding, transformation, and code generation. The context window has been shortened to 32k tokens (about 24,000 words) to optimize for speed and cost. For longer context messages, please try Mistral-Large-2-128k.
Mistral-Large-2-128k
Mistral's latest text generation model (Mistral-Large-2407) has top-tier reasoning capabilities for complex multilingual reasoning tasks, including text understanding, transformation, and code generation. This bot supports the model's full 128k context window.
RekaCore
Reka's largest and most capable multimodal language model. Works with text, images, and video inputs. 8k context length.
RekaFlash
Reka's efficient and capable 21B multimodal model optimized for fast workloads and amazing quality. Works with text, images and video inputs.
Command-R-Plus
A supercharged version of Command R. I can search the web for up-to-date information and respond in over 10 languages!
Claude-3.5-Sonnet-June
Anthropic's legacy Sonnet 3.5 model, specifically the June 2024 version (for the latest version, please try Claude-3.5-Sonnet). Excels at complex tasks like coding, writing, analysis, and vision; more verbose than the October 2024 version. The compute points value is subject to change.
GPT-3.5-Turbo
Powered by gpt-3.5-turbo.
GPT-3.5-Turbo-16k
Powered by gpt-3.5-turbo-16k.
GPT-4-Turbo-128k
Powered by GPT-4 Turbo with Vision.
Claude-3.5-Sonnet-200k
Anthropic's most powerful model (using the latest model snapshot as of October 22, 2024). Excels at complex tasks like agentic coding and vision. It also excels at working with long documents, ensuring accuracy for tasks like search and retrieval and comparing multiple long documents. Supports 200k tokens of context. The compute points value is subject to change.
Claude-3.5-Haiku-200k
The latest generation of Anthropic's fastest model. Claude 3.5 Haiku has fast speeds, improved instruction following, and more accurate tool use. The compute points value is subject to change.
Claude-3.5-Sonnet-June-200k
Anthropic's legacy Sonnet 3.5 model, specifically the June 2024 version (for the latest version, please try Claude-3.5-Sonnet-200k). Excels at complex tasks like coding, writing, analysis, and vision; more verbose than the October 2024 version. The compute points value is subject to change.
Claude-3-Sonnet-200k
Anthropic's Claude-3-Sonnet balances intelligence and speed. Supports 200k tokens of context. The compute points value is subject to change.
Claude-3-Haiku-200k
Anthropic's Claude 3 Haiku outperforms models in its intelligence class on performance, speed, and cost, without requiring specialized fine-tuning. The compute points value is subject to change.
Claude-3-Opus-200k
Anthropic's Claude 3 Opus handles complex analysis, longer multi-step tasks, and higher-order math and coding tasks. Supports 200k tokens of context. The compute points value is subject to change.
Mixtral-8x7B-Chat
Mixtral 8x7B Mixture-of-Experts model from Mistral AI, fine-tuned for instruction following. Hosted by Fireworks.ai: https://app.fireworks.ai/models/fireworks/mixtral-8x7b-instruct
Qwen-1.5-110B-T
Qwen1.5 (通义千问1.5) 110B, Alibaba's general-purpose model, which excels particularly in Chinese-language conversation and queries. The points price is subject to change.
Qwen2-72B-Instruct-T
Qwen2 (通义千问2) 72B, Alibaba's general-purpose model, which excels particularly in Chinese-language conversation and queries. The points price is subject to change.
Qwen-72B-T
Qwen1.5 (通义千问1.5) 72B, Alibaba's general-purpose model, which excels particularly in Chinese-language conversation and queries. The points price is subject to change.
Claude-2
Anthropic's Claude 2 model. The context window has been shortened to optimize for speed and cost. For longer context messages, please try Claude-2-100k.
Claude-2-100k
Anthropic's Claude 2 model, supporting a context window of 100k tokens (about 75,000 words). Particularly good at creative writing.
GPT-4o-Aug
OpenAI's most powerful model, GPT-4o, using the August 2024 model snapshot. Stronger than GPT-3.5 on quantitative questions (math and physics), creative writing, and many other challenging tasks. For the latest November 2024 model snapshot, visit https://poe.com/GPT-4o. The context window has been shortened to optimize performance -- for longer context messages, please use https://poe.com/GPT-4o-Aug-128k.
GPT-4o-Aug-128k
OpenAI's most powerful model, GPT-4o, using the August 2024 model snapshot. Stronger than GPT-3.5 on quantitative questions (math and physics), creative writing, and many other challenging tasks. For the latest November 2024 model snapshot, visit https://poe.com/GPT-4o.
GPT-4-Classic
OpenAI's GPT-4 model. Text input is powered by gpt-4-0613 (non-Turbo); image input is powered by gpt-4o.
GPT-4-Classic-0314
OpenAI's GPT-4 model. Text input is powered by gpt-4-0314 (non-Turbo); image input is powered by gpt-4o.
Google-PaLM
Powered by Google's PaLM 2 chat-bison@002 model. Supports an 8k-token context window.
Llama-3-8b-Groq
Llama 3 8b powered by the Groq LPU™ Inference Engine.
Llama-3-8B-T
Llama 3 8B Instruct from Meta. The points price is subject to change.
MythoMax-L2-13B
Created by Gryphe and based on Llama-2-13B, this model excels at roleplay and story writing.
Code-Llama-34b
Meta's Code-Llama-34b-instruct. Excels at generating and discussing code, and supports a 16k-token context window.
Solar-Pro
The most intelligent LLM on a single GPU. -- Superior Instruction-Following Capability: Delivers exceptional accuracy in understanding and executing complex instructions. -- Advanced Structured Text Understanding: Excels in processing structured formats such as HTML, Markdown, and tables. -- Leading Multilingual Performance: Achieves top-tier results in Korean, English, and Japanese General Intelligence among single-GPU models. -- Domain-Specific Expertise: Demonstrates unparalleled knowledge in critical enterprise domains, including finance, healthcare, and legal, among the models which fit in a single GPU.
GPT-3.5-Turbo-Instruct
Powered by gpt-3.5-turbo-instruct.
GPT-3.5-Turbo-Raw
Powered by gpt-3.5-turbo, with no system prompt.
Claude-2.1-200k
Anthropic's Claude 2.1 offers improved performance over Claude 2, with the context size increased to 200k tokens.
Mixtral-8x7b-Groq
Mixtral 8x7B running on the Groq LPU™ Inference Engine is available to users. Visit console.groq.com for API access.
remove-background
Remove background from your images
Mistral-7B-v0.3-T
Mistral Instruct 7B v0.3 from Mistral AI. The points price is subject to change.
Llama-3.2-11B
Llama goes multimodal! With the new Llama 3.2 11B vision LLM, you can now reason over high-resolution images such as charts and graphs, or perform image captioning. This is the vision instruct-tuned model from Meta, hosted by Fireworks AI. This bot currently supports only one image per message.
Llama-3.2-90B-FW-131k
Llama goes multimodal! With the new Llama 3.2 90B vision LLM, you can now reason over high-resolution images such as charts and graphs, or perform image captioning. This is the vision instruct-tuned model from Meta, hosted by Fireworks AI. This bot currently supports only one image per message.
Tako
Tako is a bot that turns your questions about stocks, sports, the economy, or politics into authoritatively sourced, interactive, shareable knowledge cards. Tako's knowledge graph is built entirely on authoritative, real-time data providers and can be embedded in your apps, research, and storytelling. You can adjust the specificity threshold by adding `--specificity 30` (or any value from 0-100) at the end of your query/question; the default is 60.
Llama-3.1-405B-Base
SOTA base completion model that enables seamless interactions.
Llama-3-70B-FP16
A highly efficient and powerful model designed for a variety of tasks, with a 128K context length.
Llama-3.1-8B-FP16
The smallest and fastest member of the Llama 3.1 family, offering exceptional efficiency and rapid response times with 128K context length.
Llama-3.1-70B-FP16
The best LLM at its size, with faster response times than the 405B model and a 128K context length.
Llama-3.1-405B-FP16
The biggest and best open-source AI model trained by Meta, beating GPT-4o across most benchmarks. This bot runs in BF16 with a 128K context length.
Hermes-3-70B
Hermes 3 is the latest version of Nous Research's flagship Hermes series of LLMs. Hermes 3 is a generalist language model with many improvements over Hermes 2, including advanced agentic capabilities, much better roleplaying, reasoning, multi-turn conversation, long-context coherence, and improvements across the board. The ethos of the Hermes series of models is focused on aligning LLMs to the user, with powerful steering capabilities and control given to the end user.
Flux-1-Schnell-FW
FLUX.1 [schnell] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. Key features: (1) cutting-edge output quality and competitive prompt following, matching the performance of closed-source alternatives; (2) trained using latent adversarial diffusion distillation, FLUX.1 [schnell] can generate high-quality images in only 1 to 4 steps; (3) released under the Apache 2.0 license, the model can be used for personal, scientific, and commercial purposes.
Flux-1-Dev-FW
FLUX.1 [dev] is a 12 billion parameter rectified flow transformer capable of generating images from text descriptions. Key features: (1) cutting-edge output quality, second only to the state-of-the-art model FLUX.1 [pro]; (2) competitive prompt following, matching the performance of closed-source alternatives; (3) trained using guidance distillation, making FLUX.1 [dev] more efficient; (4) open weights to drive new scientific research and empower artists to develop innovative workflows; (5) generated outputs can be used for personal, scientific, and commercial purposes as described in the FLUX.1 [dev] Non-Commercial License.
Mochi-preview
Open state-of-the-art video generation model with high-fidelity motion and strong prompt adherence. Only text-to-video is supported at the moment.
StableDiffusion3.5-T
A faster version of Stable Diffusion 3 Large, hosted by @fal. Excels at fast image generation. Use "--aspect" to select an aspect ratio (e.g. --aspect 1:1).
Runway
Runway's Gen-3 Alpha Turbo model creates best-in-class, controllable, high-fidelity video generations from your prompts. Both text and image input are supported, but we recommend image input for best results. Use --aspect (16:9, 9:16, landscape, portrait) to generate landscape/portrait videos.
SD3.5-Large-Turbo
Stable Diffusion 3.5 Large Turbo is a Multimodal Diffusion Transformer (MMDiT) text-to-image model with Adversarial Diffusion Distillation (ADD) that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency, with a focus on fewer inference steps. Powered By Stability AI and hosted by fireworks.ai
SD3.5-Medium
Stable Diffusion 3.5 Medium is a Multimodal Diffusion Transformer (MMDiT) text-to-image model with Adversarial Diffusion Distillation (ADD) that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency, with a focus on fewer inference steps. Powered By Stability AI and hosted by fireworks.ai
SD3.5-Large
Stable Diffusion 3.5 Large is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency. Powered By Stability AI, hosted by fireworks.ai.
Recraft-V3
Recraft V3, state of the art image generation. Use --style for styles, and --aspect for aspect ratio configuration. Available styles: realistic_image, digital_illustration, vector_illustration, realistic_image/b_and_w, realistic_image/hard_flash, realistic_image/hdr, realistic_image/natural_light, realistic_image/studio_portrait, realistic_image/enterprise, realistic_image/motion_blur, digital_illustration/pixel_art, digital_illustration/hand_drawn, digital_illustration/grain, digital_illustration/infantile_sketch, digital_illustration/2d_art_poster, digital_illustration/handmade_3d, digital_illustration/hand_drawn_outline, digital_illustration/engraving_color, digital_illustration/2d_art_poster_2, vector_illustration/engraving, vector_illustration/line_art, vector_illustration/line_circuit, vector_illustration/linocut
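For illustration only (this example is not part of the original listing), a Recraft-V3 prompt combining these flags might look like: "A minimalist fox logo --style vector_illustration/line_art --aspect 1:1", where the style name is taken from the list above.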
Dream-Machine
Luma AI's Dream Machine is an AI model that makes high-quality, realistic videos fast from text and images. Iterate at the speed of thought, create action-packed shots, and dream worlds with consistent characters on Poe today! To specify the aspect ratio of your video add --aspect_ratio (1:1, 16:9, 9:16, 4:3, 3:4, 21:9, 9:21). To loop your video add --loop True.
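For illustration only (this example is not part of the original listing), a Dream-Machine prompt using these flags might look like: "A drone shot over a coastal city at sunrise --aspect_ratio 21:9 --loop True".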
Grok-beta-128k
Grok-beta is an early preview of xAI's most intelligent language model. It has state-of-the-art coding, reasoning, and question-answering capabilities and excels at complex, multi-step tasks. As part of its integration with Poe, Grok-beta does not have access to real-time information from X or the internet. The compute points value is subject to change.
Flux-Schnell-T
Lightning-fast AI image generation model that excels in producing high-quality visuals in just seconds. Great for quick prototyping or real-time use cases. This is the fastest version of FLUX.1.
FLUX-pro-1-T
The flagship model in the FLUX.1 lineup. Excels in prompt following, visual quality, image detail, and output diversity.
FLUX-pro-1.1-T
BFL's state-of-the-art image model. FLUX 1.1 Pro generates images six times faster than its predecessor, FLUX 1 Pro, while also improving image quality, prompt adherence, and output diversity.
Qwen-2.5-7B-T
Qwen 2.5 7B from Alibaba. Excels in coding, math, instruction following, and natural language understanding, and offers great multilingual support covering more than 29 languages.
Qwen-2.5-72B-T
Qwen 2.5 72B from Alibaba. Excels in coding, math, instruction following, and natural language understanding, and offers great multilingual support covering more than 29 languages. It delivers results on par with Llama-3-405B despite using only one-fifth of the parameters.
Qwen2.5-Coder-32B
Qwen2.5-Coder is the latest series of code-specific Qwen large language models (formerly known as CodeQwen), developed by Alibaba.
FLUX-pro-1.1-ultra
State-of-the-art image generation with four times the resolution of standard FLUX-1.1-pro. Best-in-class prompt adherence and pixel-perfect image detail. Use "--aspect" to select an aspect ratio (e.g. --aspect 1:1). Add "--raw" (no other arguments needed) for an overall less processed, everyday aesthetic. Send an image to have this model reimagine/regenerate it via FLUX Redux.
Pika-1.5
Pika's latest and most powerful video generation model (now with Pikaffects). Generate a video using a text prompt, an image, or a video. Pass in optional parameters using flags following your prompt. (e.g. "A cat riding a skateboard --tilt up --no dogs --pikaffect squish") Additional parameters: --pikaffect (squish, explode, crush, inflate, cake-ify, melt) --aspect (e.g. 16:9, 19:13) --framerate (between 8-24) --motion (motion intensity, between 0-4) --gs (guidance scale/relevance to text, between 8-24) --no (negative prompt, e.g. ugly, scary) --seed (e.g. 12345) Camera control (use one at a time): --zoom ("in" or "out") --pan ("left" or "right") --tilt ("up" or "down") --rotate ("cw" or "ccw") Powered by Pika.
Haiper2.0
Text- and image-to-video model by Haiper.
Python
Executes Python code (version 3.11) from the user's message and outputs the result. If the user's message contains code blocks (surrounded by triple backticks), only those code blocks are executed. The following libraries are automatically imported in this bot's runtime -- numpy, pandas, requests, matplotlib, scikit-learn, torch, PyYAML, tensorflow, scipy, pytest -- along with roughly 150 of the most commonly used Python libraries.
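For illustration only (this example is not part of the original listing), a message like the following could be sent to the Python bot; only the fenced block would be executed. The explicit imports are optional since numpy and pandas are pre-imported in the bot's runtime, but they keep the snippet self-contained:

```python
# Hypothetical message body for the Python bot: build a small DataFrame
# and print summary statistics. numpy and pandas are pre-imported in the
# bot's runtime; importing them here just keeps the snippet runnable on its own.
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": np.arange(5), "y": np.arange(5) ** 2})
print(df.describe())
```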
Luma-Photon
Luma Photon delivers industry-specific visual excellence, crafting images that align perfectly with professional standards - not just generic AI art. From marketing to creative design, each generation is purposefully tailored to your industry's unique requirements.
Luma-Photon-Flash
Luma Photon delivers industry-specific visual excellence, crafting images that align perfectly with professional standards - not just generic AI art. From marketing to creative design, each generation is purposefully tailored to your industry's unique requirements.
Qwen-2.5-Coder-32B-T
A powerful model from Alibaba with 32.5B parameters, excelling in coding, math, and multilingual tasks. It offers strong performance across various domains while being more compact than larger models.
QwQ-32B-Preview-T
An experimental research model focused on advancing AI reasoning capabilities, on par with o1-mini and o1-preview. It demonstrates exceptional performance in complex problem-solving, achieving impressive scores on mathematical and scientific reasoning benchmarks (65.2% on GPQA, 50.0% on AIME, 90.6% on MATH-500).
Kling-Pro-v1.5
Kling v1.5 video generation bot, hosted by fal.ai. For best results, upload an image attachment.
Cartesia
Generates audio based on your prompt using Cartesia's Sonic text-to-speech model in your voice of choice (see below) Add --voice [Voice Name] to the end of a message to customize the voice used or to handle different language inputs (e.g. 你好 --voice Chinese Commercial Woman). The following voices are supported covering 14 languages (English, French, German, Spanish, Portuguese, Chinese, Japanese, Hindi, Italian, Korean, Dutch, Polish, Russian, Swedish, Turkish): 1920's Radioman Anna Anime Girl Australian Customer Support Man Australian Male Australian Narrator Lady Barbershop Man British Customer Support Lady British Lady British Narration Lady British Reading Lady California Girl Calm Lady Child Chinese Commercial Woman Chinese Female Conversational Chinese Woman Narrator Commercial Lady Customer Support Lady Customer Support Man Doctor Mischief Female Nurse Female Storyteller Lady French Conversational Lady French Narrator Lady French Narrator Man Friendly Australian Man Friendly Brazilian Man Friendly Reading Man Friendly Sidekick German Conversation Man German Conversational Woman German Reporter Woman German Storyteller Man German Woman Helpful French Lady Helpful Woman Hindi Narrator Woman Hinglish Speaking Lady Indian Customer Support Lady Indian Lady Indian Man Italian Narrator Woman Japanese Children Book Japanese Male Conversational Japanese Man Book Japanese Woman Conversational John Kentucky Woman Korean Narrator Woman Laidback Woman Madame Mischief Maria Meditation Lady Mexican Woman Middle Eastern Woman Midwestern Woman Movieman New York Man Newslady Newsman Nonfiction Man Pleasant Brazilian Lady Pleasant Man Polish Narrator Woman Reading Lady Reflective Woman Russian Calm Lady Russian Narrator Man 1 Russian Narrator Man 2 Russian Narrator Woman Salesman Sarah Sarah Curious Southern Woman Spanish Narrator Lady Spanish-speaking Lady Spanish-speaking Man Spanish-speaking Reporter Man Spanish-speaking Storyteller Man Sportsman Stern French Man Sweet Lady Swedish Calm Lady The Merchant Turkish Narrator Man Turkish Calm Man Tutorial Man Wise Lady Wise Man Wizardman Yogaman
Qwen-QwQ-32b-preview
The Qwen QwQ model focuses on advancing AI reasoning and showcases the power of open models to match closed frontier-model performance. QwQ-32B-Preview is an experimental release, comparable to o1 and surpassing GPT-4o and Claude 3.5 Sonnet in analytical and reasoning ability across the GPQA, AIME, MATH-500, and LiveCodeBench benchmarks. Note: this model is served experimentally by Fireworks.AI.
Llama-3.3-70B-FW
Meta's Llama 3.3 70B Instruct, hosted by Fireworks AI. Llama 3.3 70B is a new open source model that delivers leading performance and quality across text-based use cases such as synthetic data generation at a fraction of the inference cost, improving over Llama 3.1 70B.
Llama-3.3-70B
Llama 3.3 70B offers similar performance to Llama 3.1 405B while being faster and much smaller! Llama 3.3 70B is a new open source model that delivers leading performance and quality across text-based use cases such as synthetic data generation at a fraction of the inference cost, improving over Llama 3.1 70B.
Llama-3.3-70B-FP16
The Meta Llama 3.3 multilingual large language model (LLM) is a pretrained and instruction tuned generative model in a 70B size (text in/text out). The Llama 3.3 instruction tuned text-only model is optimized for multilingual dialogue use cases and outperforms many of the available open source and closed chat models on common industry benchmarks.
ElevenLabs
ElevenLabs' leading text-to-speech technology converts your text into natural-sounding speech.