Commit
Cross-repo references using a URL that contains only a folder name cause links on the github.io site to jump to the github.com site. Most of these references are meant to link to the folder's documentation (README.md), so add that file name where appropriate so links on the github.io site stay there. With those links fixed, we can also reduce the complexity of the fix-github-md-refs script, which was trying to append README.md to folder names itself. Also adds an area to keep test files for future work, and updates charter.md to indicate that TSC information has moved into the docs folder, in the community/TSC.rst file.

Signed-off-by: David B. Kinder <[email protected]>
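The kind of rewrite this commit applies to the docs (and that fix-github-md-refs used to attempt) can be sketched as follows. This is a hypothetical illustration, not the project's actual script; the repo list and regex are assumptions:

```python
import re

# Hypothetical sketch: append README.md to cross-repo links that name only a
# folder, so links on the github.io site stay on github.io. The repo list and
# the link pattern are assumptions for illustration.
REPOS = ("GenAIExamples", "GenAIEval", "GenAIComps", "GenAIInfra")
LINK_RE = re.compile(r"\]\((/(?:%s)/[^)#\s]*?)(#[^)\s]*)?\)" % "|".join(REPOS))

def _add_readme(match):
    """Point folder-only targets at their README.md; leave file links alone."""
    path, anchor = match.group(1), match.group(2) or ""
    if "." in path.rsplit("/", 1)[-1]:   # last segment already names a file
        return match.group(0)
    return "](%s/README.md%s)" % (path.rstrip("/"), anchor)

def fix_links(text):
    """Rewrite folder-only cross-repo markdown links in the given text."""
    return LINK_RE.sub(_add_readme, text)
```

A link such as `[x](/GenAIExamples/ChatQnA/benchmark)` would become `[x](/GenAIExamples/ChatQnA/benchmark/README.md)`, while links that already end in a file name pass through unchanged.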
@@ -0,0 +1,8 @@
# Kubernetes Installation Options

Here are a variety of ways to install Kubernetes:

* [Using AWS EKS Cluster](k8s_instal_aws_eks.md)
* [Using kubeadm](k8s_install_kubeadm.md)
* [Using Kubespray](k8s_install_kubespray.md)
@@ -5,4 +5,6 @@
*.rst
*.md
*.rst
*.txt
CODEOWNERS
LICENSE
@@ -0,0 +1,35 @@
# Test markdown file with cross-repo links

This folder contains a collection of Kubernetes manifest files for deploying the ChatQnA service across scalable nodes. It includes a comprehensive [benchmarking tool](/GenAIEval/evals/benchmark/README.md) that enables throughput analysis to assess inference performance.

We have created the [BKC manifest](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/benchmark) for single-node, two-node, and four-node K8s clusters. To apply it, we need to check out and configure some values.

The test uses the [benchmark tool](https://github.com/opea-project/GenAIEval/tree/main/evals/benchmark) to run performance tests. We need to set up the benchmark tool on the Kubernetes master node, k8s-master.

This document outlines the deployment process for a CodeGen application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an Intel Gaudi2 server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `llm`. We will publish the Docker images to Docker Hub soon, further simplifying the deployment process for this service.

Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.

If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker_compose/intel/cpu/xeon#validate-microservices) first.

Update the knowledge base via the local file [nke-10k-2023.pdf](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/redis/data/nke-10k-2023.pdf).

Please refer to the [Xeon README](/GenAIExamples/AudioQnA/docker_compose/intel/cpu/xeon/README.md) or [Gaudi README](/GenAIExamples/AudioQnA/docker_compose/intel/hpu/gaudi/README.md) to build the OPEA images. These too will be available on Docker Hub soon to simplify use.

Here's a [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/reranks/tei/Dockerfile) to a Docker file.

You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain#5-customize-agent-strategy).

Here's another [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/docker/Dockerfile.react) to examine.

Here is a nice one, [Docker Xeon README](/GenAIExamples/DocSum/docker_compose/intel/cpu/xeon/README.md), and the same with a section reference, [Docker Xeon README](/GenAIExamples/DocSum/docker_compose/intel/cpu/xeon/README.md#section).

And a reference to a python file, [finetune_config](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/finetune_config.py), to keep things interesting.

Here's an [issue](https://github.com/opea-project/GenAIExamples/issues/763)
reference and an
[Actions](https://github.com/opea-project/GenAIExamples/actions) reference too.
Might as well test [PRs](https://github.com/opea-project/GenAIExamples/pulls)
and [Projects](https://github.com/opea-project/GenAIExamples/projects) too.

In release notes you will find [88b3c1](https://github.com/opea-project/GenAIInfra/commit/88b3c108e5b5e3bfb6d9346ce2863b69f70cc2f1) commit references.
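Folder-only github.com links like those in the test file above could be flagged by a helper along these lines. This is a hypothetical sketch, not the repo's actual tooling; the pattern and the extension heuristic are assumptions:

```python
import re

# Hypothetical checker: flag github.com "tree" links that end at a folder,
# since following them leaves the github.io site. Pattern and heuristic are
# assumptions for illustration, not the project's actual scripts.
FOLDER_LINK = re.compile(r"\]\((https://github\.com/opea-project/[^)]+?/tree/[^)#]+)")

def folder_links(markdown):
    """Return link targets that point at a folder rather than a file."""
    hits = []
    for target in FOLDER_LINK.findall(markdown):
        last = target.rstrip("/").rsplit("/", 1)[-1]
        if "." not in last:          # no file extension -> treat as folder link
            hits.append(target)
    return hits
```

Running such a check over the fixture would report the `tree/main/...` folder links while leaving `blob/main/...` file links alone.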
@@ -0,0 +1,35 @@
# Test markdown file with cross-repo links

This folder contains a collection of Kubernetes manifest files for deploying the ChatQnA service across scalable nodes. It includes a comprehensive [benchmarking tool](https://github.com/opea-project/GenAIEval/blob/main/evals/benchmark/README.md) that enables throughput analysis to assess inference performance.

We have created the [BKC manifest](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA/benchmark) for single-node, two-node, and four-node K8s clusters. To apply it, we need to check out and configure some values.

The test uses the [benchmark tool](https://github.com/opea-project/GenAIEval/tree/main/evals/benchmark) to run performance tests. We need to set up the benchmark tool on the Kubernetes master node, k8s-master.

This document outlines the deployment process for a CodeGen application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an Intel Gaudi2 server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `llm`. We will publish the Docker images to Docker Hub soon, further simplifying the deployment process for this service.

Install GMC in your Kubernetes cluster, if you have not already done so, by following the steps in Section "Getting Started" at [GMC Install](https://github.com/opea-project/GenAIInfra/tree/main/microservices-connector#readme). We will soon publish images to Docker Hub, at which point no builds will be required, further simplifying install.

If you get errors like "Access Denied", [validate the microservice](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen/docker_compose/intel/cpu/xeon#validate-microservices) first.

Update the knowledge base via the local file [nke-10k-2023.pdf](https://github.com/opea-project/GenAIComps/blob/main/comps/retrievers/redis/data/nke-10k-2023.pdf).

Please refer to the [Xeon README](https://github.com/opea-project/GenAIExamples/blob/main/AudioQnA/docker_compose/intel/cpu/xeon/README.md) or [Gaudi README](https://github.com/opea-project/GenAIExamples/blob/main/AudioQnA/docker_compose/intel/hpu/gaudi/README.md) to build the OPEA images. These too will be available on Docker Hub soon to simplify use.

Here's a [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/reranks/tei/Dockerfile) to a Docker file.

You can take a look at the tools yaml and python files in this example. For more details, please refer to the "Provide your own tools" section in the instructions [here](https://github.com/opea-project/GenAIComps/tree/main/comps/agent/langchain#5-customize-agent-strategy).

Here's another [Link](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/ui/docker/Dockerfile.react) to examine.

Here is a nice one, [Docker Xeon README](https://github.com/opea-project/GenAIExamples/blob/main/DocSum/docker_compose/intel/cpu/xeon/README.md), and the same with a section reference, [Docker Xeon README](https://github.com/opea-project/GenAIExamples/blob/main/DocSum/docker_compose/intel/cpu/xeon/README.md#section).

And a reference to a python file, [finetune_config](https://github.com/opea-project/GenAIComps/blob/main/comps/finetuning/finetune_config.py), to keep things interesting.

Here's an [issue](https://github.com/opea-project/GenAIExamples/issues/763)
reference and an
[Actions](https://github.com/opea-project/GenAIExamples/actions) reference too.
Might as well test [PRs](https://github.com/opea-project/GenAIExamples/pulls)
and [Projects](https://github.com/opea-project/GenAIExamples/projects) too.

In release notes you will find [88b3c1](https://github.com/opea-project/GenAIInfra/commit/88b3c108e5b5e3bfb6d9346ce2863b69f70cc2f1) commit references.
This doesn't seem like the correct link.