feat: Consider increasing otel attribute limits #1964


Open
Mortalife opened this issue Apr 22, 2025 · 3 comments

Comments

@Mortalife
Contributor

Mortalife commented Apr 22, 2025

Is your feature request related to a problem? Please describe.

While using the Vercel AI SDK I noticed that the current values for the span/trace attribute length aren't sufficient to carry meaningful trace information from the AI SDK. This results in truncated logs.

I was personally using the LangfuseExporter, and while the exporter was added successfully, it suffered from the same limitations placed on traces by the trigger platform.

It would be great if a solution could be found to enable these SDKs to export the information in its entirety, so as to provide the information I'm looking for when debugging/monitoring my LLM applications.

Describe the solution you'd like to see

Considering these attributes will contain an entire message history, you're looking at limits vastly exceeding what is currently set. The current limit doesn't even cover the system prompt. With that in mind, I can't really give you much of an idea of what the limits should look like, other than "close to or exceeding model context windows".
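To put "close to or exceeding model context windows" into rough numbers, here is a small back-of-the-envelope sketch. The ~4 characters-per-token ratio and the example window sizes are assumptions for illustration, not figures from the issue or from any specific model:

```typescript
// Rough sizing: how many characters could a single attribute hold if it
// carries a full message history that fills the model's context window?
// Assumption (hypothetical): ~4 characters per token on average.
function approxAttributeChars(contextWindowTokens: number, charsPerToken = 4): number {
  return contextWindowTokens * charsPerToken;
}

// Example context window sizes in tokens (illustrative, not exhaustive).
const windows = [8_192, 128_000, 200_000];
for (const tokens of windows) {
  console.log(`${tokens} tokens ≈ ${approxAttributeChars(tokens)} chars`);
}
// Under these assumptions a 128k-token window works out to roughly
// 512,000 characters, i.e. hundreds of KB for a single attribute value.
```

The point is only that any limit meant to hold full LLM message histories lands in the hundreds-of-kilobytes range per attribute, far beyond typical default span attribute limits.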

Describe alternate solutions

A way to provide the raw spans to exporters without truncation.

Additional information

No response

@ericallam
Member

We could increase the client side limits while truncating attribute value lengths on the server to prevent excessive storage usage as a quick fix
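A minimal sketch of what that quick fix could look like on the server side. The names here (`truncateAttributes`, `MAX_STORED_ATTR_CHARS`) are assumptions for illustration, not the actual trigger.dev implementation:

```typescript
// Hypothetical server-side step: accept large attribute values from the
// client, but cap what gets persisted to keep storage usage bounded.
const MAX_STORED_ATTR_CHARS = 65_536; // example cap, not a real setting

type Attributes = Record<string, string | number | boolean>;

function truncateAttributes(
  attrs: Attributes,
  limit: number = MAX_STORED_ATTR_CHARS
): Attributes {
  const out: Attributes = {};
  for (const [key, value] of Object.entries(attrs)) {
    if (typeof value === "string" && value.length > limit) {
      // Append a marker so consumers can tell the value was cut server-side
      // rather than silently losing data.
      out[key] = value.slice(0, limit) + "…[truncated]";
    } else {
      out[key] = value;
    }
  }
  return out;
}
```

This keeps the client free to export full message histories while the storage layer enforces its own budget, which is the split ericallam's comment suggests.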

@robechun

Following—also running into the same problem

@Mortalife
Contributor Author

> We could increase the client side limits while truncating attribute value lengths on the server to prevent excessive storage usage as a quick fix

I've moved back to doing my own tracing with the Langfuse client in the interim so this isn't blocking for me. But it would be good to get the full available tracing at some point.
