fenic 0.7.0 adds Google's latest Gemini 3 Flash Preview with granular thinking-level control, quality-of-life improvements for session management, and a repositioned README centered on use cases and context engineering for agents.
What's in it for you
- Gemini 3 Flash Preview support with four thinking levels (`high`, `medium`, `low`, `minimal`): fine-tune cost/quality tradeoffs per call.
- Refined thinking-level API with per-model validation and auto-created profiles for every supported level.
- Optional usage summary suppression via `session.stop(skip_usage_summary=True)` for automated environments and testing.
- Use-case-first README restructured around what you can build, not library internals.
Gemini 3 Flash Preview with granular thinking levels
Gemini 3 Flash Preview joins the previously added Gemini 3 Pro Preview. The thinking-level API now supports four distinct levels (`high`, `medium`, `low`, and `minimal`) with per-model validation. Gemini 3 Flash supports all four; Gemini 3 Pro supports `high` and `low`.
fenic auto-creates profiles for every supported thinking level, so switching between cost and quality tradeoffs is a one-line change.
```python
from fenic import ModelAlias
from fenic.api.session.config import (
    GoogleDeveloperLanguageModel,
    SemanticConfig,
    SessionConfig,
)

config = SessionConfig(
    semantic=SemanticConfig(
        language_models={
            "gemini-flash": GoogleDeveloperLanguageModel(
                model_name="gemini-3-flash-preview",
                rpm=100,
                tpm=1000,
            )
        }
    )
)

# Auto-created profiles: "high", "medium", "low", "minimal"
df.semantic.extract(
    "content",
    schema=MySchema,
    model=ModelAlias(name="gemini-flash", profile="low"),
)
```

Attempting to use an unsupported thinking level now raises a clear error indicating which levels are available for the given model.
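The per-model validation behaves like a small lookup over each model's allowed levels. Here is a standalone sketch using the level sets from these release notes (illustrative only, not fenic's internal code; the actual exception type and message may differ):

```python
# Supported thinking levels per model, as listed in these release notes.
SUPPORTED_THINKING_LEVELS = {
    "gemini-3-flash-preview": {"high", "medium", "low", "minimal"},
    "gemini-3-pro-preview": {"high", "low"},
}


def validate_thinking_level(model_name: str, level: str) -> str:
    """Raise a descriptive error when a model does not support a level."""
    supported = SUPPORTED_THINKING_LEVELS[model_name]
    if level not in supported:
        raise ValueError(
            f"{model_name!r} does not support thinking level {level!r}; "
            f"available levels: {sorted(supported)}"
        )
    return level
```

For example, requesting `medium` on Gemini 3 Pro fails with a message listing `['high', 'low']`, while any of the four levels passes for Gemini 3 Flash.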
Optional usage summary suppression
Session.stop() now accepts skip_usage_summary to suppress the default usage summary printout. Useful for automated environments, CI pipelines, or applications that handle metrics reporting separately.
```python
session.stop(skip_usage_summary=True)
```

Works consistently across both local and cloud backends.
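The flag follows a familiar teardown pattern: resource cleanup always runs, while reporting is opt-out. A minimal standalone sketch of that pattern (toy class for illustration, not fenic's implementation):

```python
class MiniSession:
    """Toy stand-in for a session object; not fenic's actual class."""

    def __init__(self) -> None:
        self.stopped = False
        self.printed: list[str] = []

    def stop(self, skip_usage_summary: bool = False) -> None:
        # Resource teardown happens unconditionally...
        self.stopped = True
        # ...while the summary printout is opt-out, keeping CI logs quiet.
        if not skip_usage_summary:
            self.printed.append("Usage summary: <per-model token counts>")
```

In a test suite, calling `stop(skip_usage_summary=True)` in teardown keeps the session cleanup without the log noise.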
README overhaul
The README has been restructured around declarative context engineering for agents. Instead of leading with library internals, it highlights what you can build:
- Memory and personalization pipelines
- Retrieval and knowledge systems
- Context operations (chunking, parsing, embedding)
- Structured context from raw data
Long reference sections are collapsed behind expandable `<details>` blocks for scannability.
Bug fixes
- Removed `gpt-5.1-codex` from the model catalog: fenic does not currently support the responses API required by this model.
- Cloud session respects `skip_usage_summary`: the cloud backend now correctly honors this parameter, matching local backend behavior.
- Fixed module shadowing: `examples/mcp/` renamed to `examples/mcp_server/` to prevent shadowing the `mcp` Python module.
Upgrading from v0.6.x
- `CompletionModelParameters.supports_thinking_level` (a bool) has been replaced by `supported_thinking_levels` (an optional set of thinking-level strings). Code that directly accesses model parameter internals will need to be updated.
- Updated the `google-genai` dependency and bumped `fastmcp` to v2.13.0 (security fix).
Try it out and tell us what you build
```shell
pip install --upgrade fenic
```

Read the latest docs at docs.fenic.ai. Questions or ideas? File an issue with a small reproduction case.
