## Model Details
Nano Banana 2/Edit is a high-efficiency model for editing images with text prompts. It excels at making specific, context-aware changes to your images, from simple additions to complete stylistic transformations. Provide one or more images and a descriptive prompt to modify elements, apply new styles, or create composite scenes while maintaining the original image's lighting, perspective, and overall coherence.
This model is ideal for a variety of creative and commercial tasks, including:

- **Inpainting and Masking:** Conversationally define a "mask" to edit a specific part of an image while leaving the rest untouched. For example, instruct the model to "change only the blue sofa to be a vintage, brown leather chesterfield," and it will preserve the rest of the room.
- **Adding & Removing Elements:** Seamlessly add new objects to your images or remove unwanted ones. The model matches the style, lighting, and perspective of the original photo, so edits appear natural.
- **Style Transfer:** Transform a photograph into a different artistic style. Provide a reference image and instruct the model to recreate it in the style of a famous artist or a specific art movement.
### Example Usage

```javascript
import { modelrunner } from "@modelrunner/client";

const result = await modelrunner.subscribe("google/nano-banana-2/edit", {
  input: {
    images: ["https://ai.google.dev/static/gemini-api/docs/images/cat_photo.png"],
    prompt: "Using the provided image of my cat, please add a small, knitted wizard hat on its head. Make it look like it's sitting comfortably and matches the soft lighting of the photo.",
    aspect_ratio: "1:1",
    resolution: "2K",
  },
});
```
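Because `images` accepts more than one URL, the same call shape also covers the style-transfer workflow described above: a content photo plus a style reference in a single request. A minimal sketch of building that payload (the URLs and the `buildStyleTransferInput` helper are illustrative placeholders, not part of the API):

```javascript
// Build the `input` payload for a style-transfer edit.
// The first URL is the photo to transform; the second is a style reference.
function buildStyleTransferInput(contentUrl, styleUrl) {
  return {
    images: [contentUrl, styleUrl],
    prompt:
      "Recreate the first image in the artistic style of the second image, " +
      "preserving the original composition and perspective.",
    aspect_ratio: "1:1",
    resolution: "2K",
  };
}

const input = buildStyleTransferInput(
  "https://example.com/photo.png", // placeholder content image
  "https://example.com/style.png"  // placeholder style reference
);
console.log(input.images.length); // 2
```

Pass the resulting object as `input` to `modelrunner.subscribe("google/nano-banana-2/edit", { input })`, exactly as in the example above.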
## Safety & Content Moderation
The `safety_settings` parameter allows you to adjust content moderation filters for your specific use case. This setting can only be configured via the API. You can set a blocking threshold for four harm categories: Harassment, Hate Speech, Sexually Explicit, and Dangerous Content.
Each safety setting consists of a `category` and a `threshold`.

- **`category`**: The harm category to configure: `HARM_CATEGORY_HARASSMENT`, `HARM_CATEGORY_HATE_SPEECH`, `HARM_CATEGORY_SEXUALLY_EXPLICIT`, or `HARM_CATEGORY_DANGEROUS_CONTENT`.
- **`threshold`**: The confidence level at which to block content for the given category.
  - `BLOCK_NONE`: Always show content, regardless of the probability of it being unsafe.
  - `BLOCK_ONLY_HIGH`: Block content only when there is a high probability of it being unsafe.
  - `BLOCK_MEDIUM_AND_ABOVE`: Block content when there is a medium or high probability of it being unsafe.
  - `BLOCK_LOW_AND_ABOVE`: Block content when there is a low, medium, or high probability of it being unsafe.
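When the same strictness should apply across the board, the `safety_settings` array can be generated rather than written out by hand. A small sketch, assuming the Gemini-style `HARM_CATEGORY_*` identifiers for all four categories listed above (two of them appear verbatim in the example below; the harassment and sexually-explicit identifiers are inferred from that naming pattern):

```javascript
// The four harm categories the model's safety_settings support.
// Identifiers assumed to follow the Gemini API naming convention.
const HARM_CATEGORIES = [
  "HARM_CATEGORY_HARASSMENT",
  "HARM_CATEGORY_HATE_SPEECH",
  "HARM_CATEGORY_SEXUALLY_EXPLICIT",
  "HARM_CATEGORY_DANGEROUS_CONTENT",
];

// Apply one blocking threshold uniformly to every category.
function uniformSafetySettings(threshold) {
  return HARM_CATEGORIES.map((category) => ({ category, threshold }));
}

console.log(uniformSafetySettings("BLOCK_ONLY_HIGH").length); // 4
```

The returned array drops straight into the `safety_settings` field of the request `input`.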
```javascript
import { modelrunner } from "@modelrunner/client";

const result = await modelrunner.subscribe("google/nano-banana-2/edit", {
  input: {
    images: ["https://ai.google.dev/static/gemini-api/docs/images/cat_photo.png"],
    prompt: "A cat wearing a wizard hat.",
    safety_settings: [
      { category: "HARM_CATEGORY_HATE_SPEECH", threshold: "BLOCK_ONLY_HIGH" },
      { category: "HARM_CATEGORY_DANGEROUS_CONTENT", threshold: "BLOCK_MEDIUM_AND_ABOVE" },
    ],
  },
});
```

