
Additional GPU kernels added #62

Merged — 25 commits merged into IBMSparkGPU:GPUDataset on Aug 3, 2017
Conversation

josiahsams

This PR includes the following enhancements:

  1. A GPU kernel with a K-Means implementation, along with an example program for it.
  2. A GPU kernel with Logistic Regression, along with an example program for it.
  3. Cached data is passed on from one logical plan to another (Issues with GPU caching #61).
  4. Ownership information is added to cached data to ensure proper memory cleanup (Issues with GPU caching #61).

Optimizations include:

  1. The loadGpu API is optimized to load data into the GPU much faster by removing multiple data copies.
  2. Multiple invocations of loadGpu run faster if the data is already on the GPU (Issues with GPU caching #61).
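The loading and caching flow described above might be exercised roughly like this. This is a hedged sketch: the API names `loadGpu`, `cacheGpu`, and `uncacheGpu` come from this PR and its commit log, but the import path, the `Point` case class, and the file name are illustrative assumptions, not code from the PR:

```scala
import com.ibm.gpuenabler.CUDADSImplicits._   // assumed: implicit GPU ops on Dataset

case class Point(x: Double, y: Double)        // illustrative record type

val points = spark.read.textFile("kmeans-samples.txt")
  .map { line =>
    val Array(x, y) = line.split(' ').map(_.toDouble)
    Point(x, y)
  }

// Push the data to the GPU once; repeated calls are fast if the
// data is already resident on the GPU (per optimization 2 above).
points.loadGpu()

// Cached GPU pointers are handed from one logical plan to the next,
// so iterative algorithms such as K-Means avoid re-copying per step.
points.cacheGpu()

// ... run the K-Means / logistic-regression kernels over `points` here ...

// Ownership information on the cached buffers ensures cleanup frees
// each allocation exactly once.
points.uncacheGpu()
```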

@kmadhugit kmadhugit merged commit 2104ecd into IBMSparkGPU:GPUDataset Aug 3, 2017

a-agrz commented Oct 5, 2017

Hi!
In "GpuKMeansFile", line 26, the path "/src/main/resources/kmeans-samples.txt" does not exist.

Best regards,
Abdallah

kmadhugit pushed a commit that referenced this pull request Oct 23, 2017
* Spark 2.1 support

* Sample codegen on top of DS

* working sample

* v2

* Stable jcuda codegen

* Removed println

* Removed the hardcoded size for partitions

* code cleanup

* Pinned Memory

* Pinned Memory for GPUOUT

* Array support

* Array support 1

* Array Support

* support for const array

* made logical plan to produce & consume objects

* incorporated review comments (#46)

* incorporated review comments

* include GpuDSArrayMult.scala sample file

* GPU Caching Feature Added for GpuEnabler Dataset APIs (#49)

* incorporated review comments

* enable cache prelim

* push cache setting to codegen part prelims

* cached gpuPtrs handled in codegen part prelims

* uncacheGPU for dataset included prelims

* GPU dimensions support with provision for multiple stage execution prelims

* handle constants - prelims

* code package movement

* handle constants - prelims2

* add comments - prelims2

* api's simplified

* reorder gpuoutput in the args list

* variable can be both GPUINPUT & GPUOUTPUT

* variable can be both GPUINPUT & GPUOUTPUT bug fixes

* started with testcase addition for GPU operations on Dataset

* new testcases addition

* cache bug fixes & new testcases added

* bugs related to multithreading fixed

* code comments added

* code caching added for autogenerated code

* cleaned up examples

* cleaned up examples

* Alloc host memory if the cached data is part of output

* perf issue patched

* minor mistake

* include compare perf example

* Performance & other misc patches (#50)

* guava dependency wrong version fixed

* prelim commit 1

* bug fixes

* performance bug fixes

* patch for perfDebug sample prog (#53)

* API to load data into GPU (#56)

* patch for perfDebug sample prog

* Added loadGpu API

* resolve conflicts

* Added loadGpu API

* resolve conflicts

* samples invoke loadGpu

* Performance optimization on speed and memory (#59)

* auto caching added between GPU calls

* toggle autocache gpu

* logging added

* testcase added for gpuonly cache

* optimize mem alloc

* reduce mem requirement

* handle buffer underflow

* optimize code flow

* Support for Multi Dimension as input GPU Grid dimension (#60)

* Additional GPU kernels added (#62)

* auto caching added between GPU calls

* toggle autocache gpu

* logging added

* testcase added for gpuonly cache

* optimize mem alloc

* reduce mem requirement

* optimize code flow

* sample program for logistic reg added

* kmeans partial code drop

* stabilize kmeans example

* gpu memory limit check added

* stabilize kmeans example

* GpuKMeans example read from file

* GpuKMeans example dump cluster index

* minor patches

* introduce gpuptr meta info & bug fixes

* child gpuptrs cleanup bug fixed

* GpuKMeans variants added

* optimize loadGpu

* optimize loadGpu

* optimize loadGpu

* optimize loadGpu

* modify perfDebug to remove sleep

* code cleanup

* remove duplicate testcase

* #Issue63: Support for CUDA8.0 and related jcuda libraries (#64)

* changes for jcuda 8.0

* Address performance issue in map GPU functions (#67)

* Heterogeneous Environment Support (#69)

* Heterogeneous Environment Support

* changes for Heterogeneous env and null dataset assertion

* GPUEnabler version changed to 2.0.0

* GPUEnabler version changed to 2.0.0

* Update README.md
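Several commits above concern how kernel arguments are classified, including that a variable can be both GPUINPUT and GPUOUTPUT. A minimal sketch of wiring a native kernel into a Dataset map follows; the names `DSCUDAFunction` and `mapExtFunc` are assumed from the project's examples, and the PTX path and kernel symbol are hypothetical:

```scala
import com.ibm.gpuenabler.CUDADSImplicits._   // assumed: adds mapExtFunc to Dataset
import com.ibm.gpuenabler.DSCUDAFunction

// Describe the native kernel: its symbol in the PTX module, which columns it
// reads and writes, and where the compiled PTX lives. A column listed in both
// the input and output arrays is treated as GPUINPUT & GPUOUTPUT, i.e. it is
// updated in place on the GPU.
val ptxURL = getClass.getResource("/GpuEnablerExamples.ptx")  // illustrative path
val multiply = DSCUDAFunction(
  "multiplyBy2",      // kernel symbol in the PTX module (hypothetical)
  Array("value"),     // GPUINPUT column(s)
  Array("value"),     // GPUOUTPUT column(s): same column, so input and output
  ptxURL)

val ds = spark.range(1, 1000)
// mapExtFunc pairs a CPU lambda (the fallback path) with the CUDA kernel
// that runs when a GPU is available.
val doubled = ds.mapExtFunc(_ * 2, multiply)
```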