Running VStar from Graphics Card

Affiliation
American Association of Variable Star Observers (AAVSO)
Tue, 01/23/2024 - 01:42

Hello everybody,

Just a simple question: is it possible to use a graphics card to run VStar? I want to work with numbers at the 10e-6 level for enhanced period accuracy, but the program keeps crashing. (BTW, I have an i7 16-core, 16 GB RAM computer.)

Clear skies!

Enrique Boeneker

Affiliation
Astronomical Society of South Australia (ASSAU)
Graphics card

When you say "run VStar from a graphics card", I immediately think: "make use of a GPU for parallelism". But I'm pretty sure that's not what you mean, since you mention (real) numbers for greater period accuracy, and many GPUs use single precision, not double.

Can you elaborate please?

Also, when you say it keeps crashing, can you say more? Is it throwing exceptions or showing an error dialog? Does File -> Log... show anything? If so, can you send the error message?

David

Affiliation
American Association of Variable Star Observers (AAVSO)
GPU processing

Hello David,

I am sorry for not explaining myself well. What I mean is precisely "make use of a GPU for parallelism", in order to increase my computing power and push VStar to manage up to 6 decimal places in frequency/period findings.

Enrique

Affiliation
Astronomical Society of South Australia (ASSAU)
GPU

Hi Enrique,

Thanks for the additional information.

As I mentioned, GPUs often use single precision, not double. However, VStar's Preferences dialog allows you to change the number of decimal places shown in results. Have you looked at that? See page 108 of the user manual.
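To make the precision point concrete, here is a small Java sketch (the Julian Date below is just an illustrative value, not from any particular dataset). At the magnitude of a typical JD, adjacent representable single-precision (float) values are a quarter of a day apart, so a float can't even resolve individual observation times, let alone support 10e-6-level period work; a double at the same magnitude resolves steps below a billionth of a day.

```java
// Illustrative only: why single precision (common on GPUs) is a
// problem for time-series work. Math.ulp gives the spacing between
// adjacent representable floating-point values at a given magnitude.
public class PrecisionDemo {
    public static void main(String[] args) {
        double jd = 2460000.5; // a typical Julian Date (illustrative value)

        // Smallest representable step at this magnitude:
        System.out.println("float  ulp: " + Math.ulp((float) jd)); // 0.25 days!
        System.out.println("double ulp: " + Math.ulp(jd));         // ~4.7e-10 days
    }
}
```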

As far as using a GPU for parallelism is concerned, there are Java libraries that allow OpenCL or CUDA kernels to be written and invoked from Java, but this does not come for free: the code must be modified to incorporate such an approach, as with any parallelism or concurrency.
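To give a feel for the kind of restructuring involved, here is a hypothetical sketch (not VStar code; the light curve and numbers are made up). The work has to be broken into independent units first; here a brute-force power computation per trial frequency is the natural "kernel", run across CPU cores via parallel streams, but the same decomposition is what an OpenCL or CUDA version would need.

```java
import java.util.stream.IntStream;

// Hypothetical sketch only -- not VStar code. It illustrates the
// restructuring any parallel approach requires: the computation must
// be split into independent units (one per trial frequency) before it
// can be farmed out, whether to CPU cores (as here) or a GPU kernel.
public class ParallelSearch {
    static final int N = 200;                   // number of observations
    static final double TRUE_PERIOD = 0.456789; // six decimal places

    // Power of one trial frequency against the light curve:
    // a single independent unit of work, i.e. a natural "kernel".
    static double power(double freq, double[] t, double[] m) {
        double c = 0.0, s = 0.0;
        for (int i = 0; i < t.length; i++) {
            double phase = 2.0 * Math.PI * freq * t[i];
            c += m[i] * Math.cos(phase);
            s += m[i] * Math.sin(phase);
        }
        return (c * c + s * s) / t.length;
    }

    static double bestPeriod() {
        // Synthetic light curve: a pure sinusoid, evenly sampled.
        double[] t = new double[N], m = new double[N];
        for (int i = 0; i < N; i++) {
            t[i] = 0.1 * i;
            m[i] = Math.sin(2.0 * Math.PI * t[i] / TRUE_PERIOD);
        }

        // Scan a fine grid of trial frequencies in parallel; each
        // grid point is independent, so this scales across cores.
        int trials = 200_000;
        double fMin = 1.0, df = 2.0 / trials;   // frequencies 1..3 cycles/day
        double[] powers = new double[trials];
        IntStream.range(0, trials).parallel()
                 .forEach(i -> powers[i] = power(fMin + i * df, t, m));

        int best = 0;
        for (int i = 1; i < trials; i++) {
            if (powers[i] > powers[best]) best = i;
        }
        return 1.0 / (fMin + best * df);
    }

    public static void main(String[] args) {
        System.out.printf("best period found: %.6f%n", bestPeriod());
    }
}
```

The point is less the parallel-stream machinery than the shape of the code: once each trial frequency is an independent unit, swapping the CPU loop for a GPU kernel launch is a mechanical (if non-trivial) change.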

Does this make sense?

David