IPSDK  4_1_0_2
IPSDK : Image Processing Software Development Kit
Macros
CudaAlgorithmSrcMacros.h File Reference

Source part of the macro set for image processing algorithm classes, for the CUDA implementation. More...

#include <IPSDKImageProcessing/Algorithm/ImageProcessingAlgorithmSrcMacrosTypes.h>

Go to the source code of this file.

Macros

#define IPSDKCUDA_CHECK_IMAGE_ON_GPU(image)
 Test if image is stored on GPU. More...
 
#define IPSDKCUDA_APPLY_CHECK_IMAGES_ON_GPU(r, data, IMAGE)   IPSDKCUDA_CHECK_IMAGE_ON_GPU(IMAGE)
 Applies IPSDKCUDA_CHECK_IMAGE_ON_GPU to IMAGE. More...
 
#define IPSDKGPU_CHECK_IMAGES_ON_GPU(IMAGES)   BOOST_PP_SEQ_FOR_EACH(IPSDKCUDA_APPLY_CHECK_IMAGES_ON_GPU, _, IMAGES)
 Parses the image list to check that all images are stored on GPU.
 
#define IPSDKCUDA_TRY_TO_LAUNCH_PROCESS_ON_GPU()
 Calls the method tryToLaunchGpuProcess(). If the calculation cannot be performed on GPU, tryToLaunchGpuProcess() returns eRetrievalResultType::eRRT_NoMore and the macro lets the retrieveProvider method fall through to the classical CPU calculation; any other result is returned directly. More...
 
#define IPSDKCUDA_CHECK_PROCESS_CAN_BE_LAUNCHED_ON_GPU()
 Checks if the process can be launched on GPU. More...
 
#define IPSDKCUDA_START_IMPLEMENT_LAUNCH_GPU_PROCESS(AlgoName)
 Implements the beginning of the body of the method tryToLaunchGpuProcess() in order to avoid code duplication. The macro parses all algorithm attributes and checks that all attribute images are loaded on GPU. If this condition is met, the GPU Lvl2 dispatcher is created so that the user only has to initialize it. Otherwise, the function notifies the retrieveProvider method that the process must be done on CPU. More...
 
#define IPSDKCUDA_END_IMPLEMENT_LAUNCH_GPU_PROCESS()
 Implements the end of the body of the method tryToLaunchGpuProcess() in order to avoid code duplication. The macro checks the result of the dispatcher initialization and returns the appropriate result to manage CPU or GPU Lvl2 calculation. More...
 
#define IPSDKCUDA_CHECK_IMAGES_ON_GPU(IMAGES)   BOOST_PP_SEQ_FOR_EACH(IPSDKCUDA_APPLY_CHECK_IMAGES_ON_GPU, _, IMAGES)
 Parses the image list to check that all images are stored on GPU.
 

Detailed Description

Source part of the macro set for image processing algorithm classes, for the CUDA implementation.

Author
R. Abbal
Date
2022/08/03

Macro Definition Documentation

◆ IPSDKCUDA_CHECK_IMAGE_ON_GPU

#define IPSDKCUDA_CHECK_IMAGE_ON_GPU (   image)
Value:
if (!image->getImagePtr()->isGpuImage())\
return RetrievalResult(eRetrievalResultType::eRRT_Failed, image->getImagePtr()->getName() + " is not a GPU image");

Test if image is stored on GPU.

◆ IPSDKCUDA_APPLY_CHECK_IMAGES_ON_GPU [1/2]

#define IPSDKCUDA_APPLY_CHECK_IMAGES_ON_GPU (   r,
  data,
  IMAGE 
)    IPSDKCUDA_CHECK_IMAGE_ON_GPU(IMAGE)

Applies IPSDKCUDA_CHECK_IMAGE_ON_GPU to IMAGE.

◆ IPSDKCUDA_TRY_TO_LAUNCH_PROCESS_ON_GPU

#define IPSDKCUDA_TRY_TO_LAUNCH_PROCESS_ON_GPU ( )
Value:
RetrievalResult res = tryToLaunchGpuProcess(priority, pProvider); \
if (res.getResult() != eRetrievalResultType::eRRT_NoMore) \
return res; \

Calls the method tryToLaunchGpuProcess(). If the calculation cannot be performed on GPU, tryToLaunchGpuProcess() returns eRetrievalResultType::eRRT_NoMore and the macro lets the retrieveProvider method fall through to the classical CPU calculation; any other result is returned directly.

◆ IPSDKCUDA_CHECK_PROCESS_CAN_BE_LAUNCHED_ON_GPU

#define IPSDKCUDA_CHECK_PROCESS_CAN_BE_LAUNCHED_ON_GPU ( )
Value:
if (bGpuCalculation) { \
const std::vector<std::string>& vAttributeNames = getAttributeNameColl(); \
for (const std::string& attributeName : vAttributeNames) { \
const BaseAttribute& curAttribute = getAttribute(attributeName); \
if (curAttribute.isInit() && curAttribute.getAttributeType() == eAttributeType::eAT_ImageProcessing) { \
const BaseImageAttribute* pImageAttribute = static_cast<const ipsdk::imaproc::BaseImageAttribute*>(&curAttribute); \
if (pImageAttribute->getImageProcessingAttributeType() == eImageProcessingAttributeType::eIPAT_Image) { \
const ipsdk::image::BaseImage& gpuImage = static_cast<const ipsdk::image::BaseImage&>(pImageAttribute->getImage()); \
bGpuCalculation = bGpuCalculation && gpuImage.isGpuImage(); \
} \
} \
} \
} \

Checks if the process can be launched on GPU.

◆ IPSDKCUDA_START_IMPLEMENT_LAUNCH_GPU_PROCESS

#define IPSDKCUDA_START_IMPLEMENT_LAUNCH_GPU_PROCESS (   AlgoName)
Value:
if (bGpuCalculation) { \
typedef StaticProcessorDispatcher<AlgoName> ProcessorDispatcher; \
boost::shared_ptr<ProcessorDispatcher> pProcessorDispatcher(boost::make_shared<ProcessorDispatcher>());

Implements the beginning of the body of the method tryToLaunchGpuProcess() in order to avoid code duplication. The macro parses all algorithm attributes and checks that all attribute images are loaded on GPU. If this condition is met, the GPU Lvl2 dispatcher is created so that the user only has to initialize it. Otherwise, the function notifies the retrieveProvider method that the process must be done on CPU.

Note
This macro, together with IPSDKCUDA_END_IMPLEMENT_LAUNCH_GPU_PROCESS(), simplifies the management of the GPU switch by letting the user specify only the dispatcher initialization

◆ IPSDKCUDA_END_IMPLEMENT_LAUNCH_GPU_PROCESS

#define IPSDKCUDA_END_IMPLEMENT_LAUNCH_GPU_PROCESS ( )
Value:
if (bInitRes.getResult() == true) { \
pProvider = pProcessorDispatcher; \
} \
else \
return RetrievalResult(eRetrievalResultType::eRRT_Failed, bInitRes.getMsg()); \
} \

Implements the end of the body of the method tryToLaunchGpuProcess() in order to avoid code duplication. The macro checks the result of the dispatcher initialization and returns the appropriate result to manage CPU or GPU Lvl2 calculation.

Note
This macro, together with IPSDKCUDA_START_IMPLEMENT_LAUNCH_GPU_PROCESS(), simplifies the management of the GPU switch by letting the user specify only the dispatcher initialization
