
Sample demo fails to run #15

@JV-X

Description


I deployed cozeloop on one of my servers and then tried to run the SDK demo, but it fails with the following errors:

(scripts) ➜  scripts  cd /Users/jianwei/Code2025/scripts ; /usr/bin/env /usr/local/Caskroom/miniconda/base/envs/scripts/bin/python /Users/jianwei/.vscode/extensions/ms-python.debugpy-2025.10.0-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher 50883 -- /Users/jianwei/Code2025/scripts/coze_test.py
2025-07-30 15:43:54,550 cozeloop.internal.httpclient.http_client http_client.py:41 [ERROR] [cozeloop] Failed to parse response. Path: http://123.57.219.52:8082/v1/loop/prompts/mget, http code: 405, log id: None, error: Expecting value: line 1 column 1 (char 0).
2025-07-30 15:43:55,546 cozeloop.internal.httpclient.http_client http_client.py:41 [ERROR] [cozeloop] Failed to parse response. Path: http://123.57.219.52:8082/v1/loop/traces/ingest, http code: 405, log id: None, error: Expecting value: line 1 column 1 (char 0).
2025-07-30 15:43:55,546 cozeloop.internal.trace.exporter exporter.py:83 [ERROR] [cozeloop] export spans fail, err:[remote service error,  [http_code=405 error_code=-1 logid=None]]
2025-07-30 15:43:56,544 cozeloop.internal.httpclient.http_client http_client.py:41 [ERROR] [cozeloop] Failed to parse response. Path: http://123.57.219.52:8082/v1/loop/traces/ingest, http code: 405, log id: None, error: Expecting value: line 1 column 1 (char 0).
2025-07-30 15:43:56,544 cozeloop.internal.trace.exporter exporter.py:83 [ERROR] [cozeloop] export spans fail, err:[remote service error,  [http_code=405 error_code=-1 logid=None]]
Traceback (most recent call last):
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/internal/httpclient/http_client.py", line 36, in parse_response
    data = response.json()
           ^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/httpx/_models.py", line 832, in json
    return jsonlib.loads(self.content, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/runpy.py", line 198, in _run_module_as_main
    return _run_code(code, main_globals, None,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/runpy.py", line 88, in _run_code
    exec(code, run_globals)
  File "/Users/jianwei/.vscode/extensions/ms-python.debugpy-2025.10.0-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/__main__.py", line 71, in <module>
    cli.main()
  File "/Users/jianwei/.vscode/extensions/ms-python.debugpy-2025.10.0-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 501, in main
    run()
  File "/Users/jianwei/.vscode/extensions/ms-python.debugpy-2025.10.0-darwin-x64/bundled/libs/debugpy/adapter/../../debugpy/launcher/../../debugpy/../debugpy/server/cli.py", line 351, in run_file
    runpy.run_path(target, run_name="__main__")
  File "/Users/jianwei/.vscode/extensions/ms-python.debugpy-2025.10.0-darwin-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 310, in run_path
    return _run_module_code(code, init_globals, run_name, pkg_name=pkg_name, script_name=fname)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/jianwei/.vscode/extensions/ms-python.debugpy-2025.10.0-darwin-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 127, in _run_module_code
    _run_code(code, mod_globals, init_globals, mod_name, mod_spec, pkg_name, script_name)
  File "/Users/jianwei/.vscode/extensions/ms-python.debugpy-2025.10.0-darwin-x64/bundled/libs/debugpy/_vendored/pydevd/_pydevd_bundle/pydevd_runpy.py", line 118, in _run_code
    exec(code, run_globals)
  File "/Users/jianwei/Code2025/scripts/coze_test.py", line 109, in <module>
    prompt = client.get_prompt(prompt_key="inter_cotradiction", version="0.0.1")
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/_client.py", line 212, in get_prompt
    return self._prompt_provider.get_prompt(prompt_key, version)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/internal/prompt/prompt.py", line 62, in get_prompt
    raise e
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/internal/prompt/prompt.py", line 52, in get_prompt
    prompt = self._get_prompt(prompt_key, version)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/internal/prompt/prompt.py", line 77, in _get_prompt
    result = self.openapi_client.mpull_prompt(self.workspace_id, [PromptQuery(prompt_key=prompt_key, version=version)])
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/internal/prompt/openapi.py", line 126, in mpull_prompt
    batch_results = self._do_mpull_prompt(workspace_id, sorted_queries)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/internal/prompt/openapi.py", line 144, in _do_mpull_prompt
    response = self.http_client.post(MPULL_PROMPT_PATH, MPullPromptResponse, request)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/internal/httpclient/client.py", line 108, in post
    return self.request(path, "POST", response_model, json=json,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/internal/httpclient/client.py", line 91, in request
    return parse_response(url, response, response_model)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/Caskroom/miniconda/base/envs/scripts/lib/python3.11/site-packages/cozeloop/internal/httpclient/http_client.py", line 42, in parse_response
    raise consts.RemoteServiceError(http_code, -1, "", log_id) from e
cozeloop.internal.consts.error.RemoteServiceError: remote service error,  [http_code=405 error_code=-1 logid=None]
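
For reference, a minimal probe against the same endpoint could help tell whether the 405 comes from the server/routing itself rather than from the SDK. The URL below is copied from the log above; the Bearer auth header and the empty JSON body are only placeholders, not the SDK's real request schema:

import httpx

# Hypothetical probe: POST a placeholder body to the endpoint from the error log.
# Getting a 405 here as well would point at the server/port (e.g. a frontend that
# does not route /v1/loop/... to the API) rather than at the SDK itself.
resp = httpx.post(
    "http://123.57.219.52:8082/v1/loop/prompts/mget",
    json={},  # placeholder body; the real request is built by the SDK
    headers={"Authorization": "Bearer <COZELOOP_API_TOKEN>"},
)
print(resp.status_code)
print(resp.headers.get("allow"))  # a 405 response often lists the allowed methods
print(resp.text[:200])            # the non-JSON body that the SDK fails to parse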

I also noticed that the left-hand sidebar of my CozeLoop console only has a few items, whereas the sidebar at https://loop.coze.cn has many more. My deployment is missing quite a few entries, such as "SDK&API", and I'm not sure whether that is the cause:

[Screenshot: the left-hand sidebar of my CozeLoop deployment, showing only a few menu items]

Also, the COZELOOP_API_TOKEN in my code was generated under User > Account Settings > Authorization, not via SDK&API > Authorization in the left-hand sidebar (because my sidebar has no SDK&API entry), so I'm not sure whether that token is the right one.

Finally, here is my code:

# Copyright (c) 2025 Bytedance Ltd. and/or its affiliates
# SPDX-License-Identifier: MIT

import json
import time
from typing import List
import os

import cozeloop
from cozeloop import Message
from cozeloop.entities.prompt import Role
from cozeloop.spec.tracespec import CALL_OPTIONS, ModelCallOption, ModelMessage, ModelInput


def convert_model_input(messages: List[Message]) -> ModelInput:
    model_messages = []
    for message in messages:
        model_messages.append(ModelMessage(
            role=str(message.role),
            content=message.content if message.content is not None else ""
        ))

    return ModelInput(
        messages=model_messages
    )


class LLMRunner:
    def __init__(self, client):
        self.client = client

    def llm_call(self, input_data):
        """
        Simulate an LLM call and set relevant span tags.
        """
        span = self.client.start_span("llmCall", "model")
        try:
            # Assuming llm is processing
            # output = ChatOpenAI().invoke(input=input_data)

            # mock resp
            time.sleep(1)
            output = "I'm a robot. I don't have a specific name. You can give me one."
            input_token = 232
            output_token = 1211

            # set tag key: `input`
            span.set_input(convert_model_input(input_data))
            # set tag key: `output`
            span.set_output(output)
            # set tag key: `model_provider`, e.g., openai, etc.
            span.set_model_provider("openai")
            # set tag key: `start_time_first_resp`
            # Timestamp of the first packet returned from the LLM, unit: microseconds.
            # When `start_time_first_resp` is set, a tag named `latency_first_resp` calculated
            # based on the span's StartTime will be added, meaning the latency for the first packet.
            span.set_start_time_first_resp(int(time.time() * 1000000))
            # set tag key: `input_tokens`. The amount of input tokens.
            # when the `input_tokens` value is set, it will automatically sum with the `output_tokens` to calculate the `tokens` tag.
            span.set_input_tokens(input_token)
            # set tag key: `output_tokens`. The amount of output tokens.
            # when the `output_tokens` value is set, it will automatically sum with the `input_tokens` to calculate the `tokens` tag.
            span.set_output_tokens(output_token)
            # set tag key: `model_name`, e.g., gpt-4-1106-preview, etc.
            span.set_model_name("gpt-4-1106-preview")
            span.set_tags({CALL_OPTIONS: ModelCallOption(
                temperature=0.5,
                top_p=0.5,
                top_k=10,
                presence_penalty=0.5,
                frequency_penalty=0.5,
                max_tokens=1024,
            )})

            return None
        except Exception as e:
            raise e
        finally:
            span.finish()


if __name__ == '__main__':
    # 1. Create a prompt on the platform
    # You can create a Prompt on the platform's Prompt development page (set Prompt Key to 'prompt_hub_demo'),
    # add the following messages to the template, and submit a version.
    # System: You are a helpful bot, the conversation topic is {{var1}}.
    # Placeholder: placeholder1
    # User: My question is {{var2}}
    # Placeholder: placeholder2

    # Set the following environment variables first.
    # COZELOOP_WORKSPACE_ID=your workspace id
    # COZELOOP_API_TOKEN=your token
    # 2. New loop client
    os.environ["COZELOOP_WORKSPACE_ID"] = "myworkspaceid"
    os.environ["COZELOOP_API_TOKEN"] = "mytoken"
    os.environ["COZELOOP_API_BASE_URL"] = "myipaddress"
    
    client = cozeloop.new_client(
        # Set whether to report a trace span when getting or formatting a prompt.
        # Default value is false.
        prompt_trace=True)

    # 3. new root span
    rootSpan = client.start_span("root_span", "main_span")

    # 4. Get the prompt
    # If no specific version is specified, the latest version of the corresponding prompt will be obtained
    prompt = client.get_prompt(prompt_key="inter_cotradiction", version="0.0.1")
    print('prompt', prompt)
    if prompt is not None:
        # Get messages of the prompt
        if prompt.prompt_template is not None:
            messages = prompt.prompt_template.messages
            print(
                f"prompt messages: {json.dumps([message.model_dump(exclude_none=True) for message in messages], ensure_ascii=False)}")
        # Get llm config of the prompt
        if prompt.llm_config is not None:
            llm_config = prompt.llm_config
            print(f"prompt llm_config: {llm_config.model_dump_json(exclude_none=True)}")

        # 5. Format messages of the prompt
        formatted_messages = client.prompt_format(prompt, {
            # Normal variable type should be string
            "user_input": "this is user input!!!!!",
            # Placeholder variable type should be Message/List[Message]
            "placeholder1": [Message(role=Role.USER, content="Hello!"),
                             Message(role=Role.ASSISTANT, content="Hello!")]
            # Other variables in the prompt template that are not provided with corresponding values will be
            # considered as empty values.
        })
        print(
            f"formatted_messages: {json.dumps([message.model_dump(exclude_none=True) for message in formatted_messages], ensure_ascii=False)}")

        # 6. LLM call
        llm_runner = LLMRunner(client)
        # llm_runner.llm_call(formatted_messages)

    rootSpan.finish()
    # 7. (optional) flush or close
    # -- force flush, report all traces in the queue
    # Warning! In general, this method does not need to be called, as spans will be automatically reported in batches.
    # Note that flush will block and wait for the report to complete, and it may cause frequent reporting,
    # affecting performance.
    client.flush()

Could you please take a look? Thanks. orz
