vllm.entrypoints.openai.api_server ¶
build_async_engine_client_from_engine_args async ¶
build_async_engine_client_from_engine_args(
engine_args: AsyncEngineArgs,
*,
usage_context: UsageContext = OPENAI_API_SERVER,
disable_frontend_multiprocessing: bool = False,
client_config: dict[str, Any] | None = None,
) -> AsyncIterator[EngineClient]
Create an EngineClient, either:
- in-process, using AsyncLLMEngine directly
- multiprocess, using AsyncLLMEngine over RPC

Returns the client, or None if creation failed.
Source code in vllm/entrypoints/openai/api_server.py
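The factory is consumed as an async context manager that yields the client and tears it down on exit. A minimal stdlib-only sketch of that pattern, with `FakeEngineClient` and `build_client` as hypothetical stand-ins for vLLM's real EngineClient and factory:

```python
import asyncio
from contextlib import asynccontextmanager

# Hypothetical stand-in for vLLM's EngineClient; the real client exposes
# generate(), abort(), check_health(), and more.
class FakeEngineClient:
    async def check_health(self) -> bool:
        return True

@asynccontextmanager
async def build_client():
    # The real factory chooses between an in-process AsyncLLMEngine and a
    # multiprocess RPC client; this sketch always yields the fake client.
    client = FakeEngineClient()
    try:
        yield client
    finally:
        # The real factory shuts down the engine / RPC process here.
        pass

async def main() -> bool:
    # Callers hold the client only for the lifetime of the context.
    async with build_client() as client:
        return await client.check_health()

print(asyncio.run(main()))  # prints True
```

The `@asynccontextmanager` decorator is what lets the factory's return type read as `AsyncIterator[EngineClient]` while callers use it with `async with`.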
run_server async ¶
Run a single-worker API server.
run_server_worker async ¶
Run a single API server worker.
setup_server ¶
Validate the API server args, set up the signal handler, and create a socket ready to serve.
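The two side effects named here, installing a signal handler and claiming a listening socket before any worker starts, can be sketched with the stdlib alone. `setup_server_sketch` below is a hypothetical illustration of the pattern, not vLLM's implementation:

```python
import signal
import socket

def setup_server_sketch(host: str = "127.0.0.1", port: int = 0) -> socket.socket:
    # Install a handler so SIGTERM triggers a clean shutdown path instead
    # of an abrupt kill.
    def handle_term(signum, frame):
        raise KeyboardInterrupt

    signal.signal(signal.SIGTERM, handle_term)

    # Create the listening socket up front so the port is already claimed
    # when workers start; port 0 asks the OS to pick a free port.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind((host, port))
    sock.listen()
    return sock
```

Claiming the socket before forking workers avoids a race where a client connects between server start and the first worker becoming ready.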