/*
 * Copyright LWJGL. All rights reserved.
 * License terms: https://www.lwjgl.org/license
 * MACHINE GENERATED FILE, DO NOT EDIT
 */
package org.lwjgl.util.vma;

import javax.annotation.*;

import java.nio.*;

import org.lwjgl.*;

import org.lwjgl.system.*;

import static org.lwjgl.system.Checks.*;
import static org.lwjgl.system.MemoryStack.*;
import static org.lwjgl.system.MemoryUtil.*;

import org.lwjgl.vulkan.*;

/**
 * The Vulkan Memory Allocator.
 * 
 * 

Quick start

* *

Initialization

* *

At program startup:

* *
  1. Initialize Vulkan to have {@code VkPhysicalDevice} and {@code VkDevice} object.
  2. Fill {@link VmaAllocatorCreateInfo} structure and create {@code VmaAllocator} object by calling {@link #vmaCreateAllocator CreateAllocator}.
* *

 * VmaAllocatorCreateInfo allocatorInfo = {};
 * allocatorInfo.physicalDevice = physicalDevice;
 * allocatorInfo.device = device;
 * 
 * VmaAllocator allocator;
 * vmaCreateAllocator(&allocatorInfo, &allocator);
* *

Resource allocation

* *

When you want to create a buffer or image:

* *
  1. Fill {@code VkBufferCreateInfo} / {@code VkImageCreateInfo} structure.
  2. Fill {@code VmaAllocationCreateInfo} structure.
  3. Call {@link #vmaCreateBuffer CreateBuffer} / {@link #vmaCreateImage CreateImage} to get {@code VkBuffer}/{@code VkImage} with memory already allocated and bound to it.
* *

 * VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 * bufferInfo.size = 65536;
 * bufferInfo.usage = VK_BUFFER_USAGE_VERTEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
 * 
 * VmaAllocationCreateInfo allocInfo = {};
 * allocInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
 * 
 * VkBuffer buffer;
 * VmaAllocation allocation;
 * vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
* *

Don't forget to destroy your objects when no longer needed:

* *

 * vmaDestroyBuffer(allocator, buffer, allocation);
 * vmaDestroyAllocator(allocator);
* *

Choosing memory type

* *

Physical devices in Vulkan support various combinations of memory heaps and types. Helping you choose the correct and optimal memory type for your specific resource is one of the key features of this library. You can use it by filling appropriate members of the {@link VmaAllocationCreateInfo} structure, as described below. You can also combine multiple methods.

* *
  1. If you just want to find the memory type index that meets your requirements, you can use function {@link #vmaFindMemoryTypeIndex FindMemoryTypeIndex} (see the sketch after this list).
  2. If you want to allocate a region of device memory without association with any specific image or buffer, you can use function {@link #vmaAllocateMemory AllocateMemory}. Usage of this function is not recommended and usually not needed.
  3. If you already have a buffer or an image created, you want to allocate memory for it and then you will bind it yourself, you can use functions {@link #vmaAllocateMemoryForBuffer AllocateMemoryForBuffer} and {@link #vmaAllocateMemoryForImage AllocateMemoryForImage}. For binding you should use functions {@link #vmaBindBufferMemory BindBufferMemory} and {@link #vmaBindImageMemory BindImageMemory}.
  4. If you want to create a buffer or an image, allocate memory for it and bind them together, all in one call, you can use functions {@link #vmaCreateBuffer CreateBuffer} and {@link #vmaCreateImage CreateImage}. This is the recommended way to use this library.
* *
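
A minimal sketch of using {@link #vmaFindMemoryTypeIndex FindMemoryTypeIndex} (method 1 above), assuming {@code allocator} already exists; passing {@code UINT32_MAX} as {@code memoryTypeBits} means no restriction on the memory type index:

* *

 * VmaAllocationCreateInfo allocInfo = {};
 * allocInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
 * 
 * uint32_t memTypeIndex;
 * VkResult res = vmaFindMemoryTypeIndex(allocator, UINT32_MAX, &allocInfo, &memTypeIndex);
 * // On VK_SUCCESS, memTypeIndex can be used e.g. as VmaPoolCreateInfo::memoryTypeIndex.
* *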

When using 3. or 4., the library internally queries Vulkan for memory types supported for that buffer or image (function {@code vkGetBufferMemoryRequirements()}) and uses only one of these types.

* *

If no memory type can be found that meets all the requirements, these functions return {@code VK_ERROR_FEATURE_NOT_PRESENT}.

* *

You can leave {@link VmaAllocationCreateInfo} structure completely filled with zeros. It means no requirements are specified for memory type. It is valid, although not very useful.

* *

Usage

* *

The easiest way to specify memory requirements is to fill member {@link VmaAllocationCreateInfo}{@code ::usage} using one of the values of enum {@code VmaMemoryUsage}. It defines high level, common usage types. For more details, see description of this enum.

* *

For example, if you want to create a uniform buffer that will be filled using transfer only once or infrequently and used for rendering every frame, you can do it using the following code:

* *

 * VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 * bufferInfo.size = 65536;
 * bufferInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
 * 
 * VmaAllocationCreateInfo allocInfo = {};
 * allocInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
 * 
 * VkBuffer buffer;
 * VmaAllocation allocation;
 * vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
* *

Required and preferred flags

* *

You can specify more detailed requirements by filling members {@link VmaAllocationCreateInfo}{@code ::requiredFlags} and {@link VmaAllocationCreateInfo}{@code ::preferredFlags} with a combination of bits from enum {@code VkMemoryPropertyFlags}. For example, if you want to create a buffer that will be persistently mapped on host (so it must be {@code HOST_VISIBLE}) and preferably will also be {@code HOST_COHERENT} and {@code HOST_CACHED}, use the following code:

* *

 * VmaAllocationCreateInfo allocInfo = {};
 * allocInfo.requiredFlags = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
 * allocInfo.preferredFlags = VK_MEMORY_PROPERTY_HOST_COHERENT_BIT | VK_MEMORY_PROPERTY_HOST_CACHED_BIT;
 * allocInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT;
 * 
 * VkBuffer buffer;
 * VmaAllocation allocation;
 * vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
* *

A memory type is chosen that has all the required flags and as many preferred flags set as possible.

* *

If you use {@link VmaAllocationCreateInfo}{@code ::usage}, it is just internally converted to a set of required and preferred flags.

* *

Explicit memory types

* *

If you inspected memory types available on the physical device and you have a preference for memory types that you want to use, you can fill member {@link VmaAllocationCreateInfo}{@code ::memoryTypeBits}. It is a bit mask, where each bit set means that a memory type with that index is allowed to be used for the allocation. Special value 0, just like {@code UINT32_MAX}, means there are no restrictions to memory type index.

* *

Please note that this member is NOT just a memory type index. Still, you can use it to choose just one specific memory type. For example, if you already determined that your buffer should be created in memory type 2, use the following code:

* *

 * uint32_t memoryTypeIndex = 2;
 * 
 * VmaAllocationCreateInfo allocInfo = {};
 * allocInfo.memoryTypeBits = 1u << memoryTypeIndex;
 * 
 * VkBuffer buffer;
 * VmaAllocation allocation;
 * vmaCreateBuffer(allocator, &bufferInfo, &allocInfo, &buffer, &allocation, nullptr);
* *

Custom memory pools

* *

If you allocate from a custom memory pool, all the ways of specifying memory requirements described above are not applicable and the aforementioned members of the {@link VmaAllocationCreateInfo} structure are ignored. The memory type is selected explicitly when creating the pool and then used to make all the allocations from that pool. For further details, see Custom Memory Pools below.

* *

Memory mapping

* *

To "map memory" in Vulkan means to obtain a CPU pointer to {@code VkDeviceMemory}, to be able to read from it or write to it in CPU code. Mapping is * possible only of memory allocated from a memory type that has {@code VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT} flag. Functions {@code vkMapMemory()}, * {@code vkUnmapMemory()} are designed for this purpose. You can use them directly with memory allocated by this library, but it is not recommended * because of following issue: Mapping the same {@code VkDeviceMemory} block multiple times is illegal - only one mapping at a time is allowed. This * includes mapping disjoint regions. Mapping is not reference-counted internally by Vulkan. Because of this, Vulkan Memory Allocator provides following * facilities:

* *

Mapping functions

* *

The library provides the following functions for mapping of a specific {@code VmaAllocation}: {@link #vmaMapMemory MapMemory}, {@link #vmaUnmapMemory UnmapMemory}. They are safer and more convenient to use than standard Vulkan functions. You can map an allocation multiple times simultaneously - mapping is reference-counted internally. You can also map different allocations simultaneously regardless of whether they use the same {@code VkDeviceMemory} block. The way it's implemented is that the library always maps the entire memory block, not just the region of the allocation. For further details, see description of the {@link #vmaMapMemory MapMemory} function. Example:

* *

 * // Having these objects initialized:
 * 
 * struct ConstantBuffer
 * {
 *     ...
 * };
 * ConstantBuffer constantBufferData;
 * 
 * VmaAllocator allocator;
 * VkBuffer constantBuffer;
 * VmaAllocation constantBufferAllocation;
 * 
 * // You can map and fill your buffer using following code:
 * 
 * void* mappedData;
 * vmaMapMemory(allocator, constantBufferAllocation, &mappedData);
 * memcpy(mappedData, &constantBufferData, sizeof(constantBufferData));
 * vmaUnmapMemory(allocator, constantBufferAllocation);
* *

Persistently mapped memory

* *

Keeping your memory persistently mapped is generally OK in Vulkan. You don't need to unmap it before using its data on the GPU. The library provides a special feature designed for that: Allocations made with the {@link #VMA_ALLOCATION_CREATE_MAPPED_BIT ALLOCATION_CREATE_MAPPED_BIT} flag set in {@link VmaAllocationCreateInfo}{@code ::flags} stay mapped all the time, so you can just access the CPU pointer to it at any time without needing to call any "map" or "unmap" function. Example:

* *

 * VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 * bufCreateInfo.size = sizeof(ConstantBuffer);
 * bufCreateInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
 * 
 * VmaAllocationCreateInfo allocCreateInfo = {};
 * allocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
 * allocCreateInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT;
 * 
 * VkBuffer buf;
 * VmaAllocation alloc;
 * VmaAllocationInfo allocInfo;
 * vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
 * 
 * // Buffer is already mapped. You can access its memory.
 * memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));
* *

There are some exceptions though, when you should consider mapping memory only for a short period of time:

* *
  • When the operating system is Windows 7 or 8.x (Windows 10 is not affected because it uses WDDM2), the device is a discrete AMD GPU, and the memory type is the special 256 MiB pool of {@code DEVICE_LOCAL + HOST_VISIBLE} memory (selected when you use {@link #VMA_MEMORY_USAGE_CPU_TO_GPU MEMORY_USAGE_CPU_TO_GPU}), then whenever a memory block allocated from this memory type stays mapped for the time of any call to {@code vkQueueSubmit()} or {@code vkQueuePresentKHR()}, this block is migrated by WDDM to system RAM, which degrades performance. It doesn't matter if that particular memory block is actually used by the command buffer being submitted.
  • Keeping many large memory blocks mapped may impact performance or stability of some debugging tools.
* *

Cache control

* *

Memory in Vulkan doesn't need to be unmapped before using it on GPU, but unless a memory type has the {@code VK_MEMORY_PROPERTY_HOST_COHERENT_BIT} flag set, you need to manually invalidate the cache before reading a mapped pointer using function {@code vkInvalidateMappedMemoryRanges()} and flush the cache after writing to a mapped pointer using function {@code vkFlushMappedMemoryRanges()}. Example:

* *

 * memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));
 * 
 * VkMemoryPropertyFlags memFlags;
 * vmaGetMemoryTypeProperties(allocator, allocInfo.memoryType, &memFlags);
 * if((memFlags & VK_MEMORY_PROPERTY_HOST_COHERENT_BIT) == 0)
 * {
 *     VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
 *     memRange.memory = allocInfo.deviceMemory;
 *     memRange.offset = allocInfo.offset;
 *     memRange.size   = allocInfo.size;
 *     vkFlushMappedMemoryRanges(device, 1, &memRange);
 * }
* *
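
A corresponding sketch for the read direction, invalidating the cache before the CPU reads data written by the GPU (same {@code allocInfo} as above; only needed when the memory type is not {@code HOST_COHERENT}):

* *

 * VkMappedMemoryRange memRange = { VK_STRUCTURE_TYPE_MAPPED_MEMORY_RANGE };
 * memRange.memory = allocInfo.deviceMemory;
 * memRange.offset = allocInfo.offset;
 * memRange.size   = allocInfo.size;
 * vkInvalidateMappedMemoryRanges(device, 1, &memRange);
 * // Now it is safe to read from allocInfo.pMappedData on the CPU.
* *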

Please note that memory allocated with {@link #VMA_MEMORY_USAGE_CPU_ONLY MEMORY_USAGE_CPU_ONLY} is guaranteed to be host coherent.

* *

Also, Windows drivers from all 3 PC GPU vendors (AMD, Intel, NVIDIA) currently provide {@code VK_MEMORY_PROPERTY_HOST_COHERENT_BIT} flag on all memory types that are {@code VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT}, so on this platform you may not need to bother.

* *

Finding out if memory is mappable

* *

It may happen that your allocation ends up in memory that is {@code HOST_VISIBLE} (available for mapping) even though it wasn't explicitly requested. For example, the application may be running on integrated graphics with unified memory (like Intel), or an allocation from video memory might have failed, so the library chose system memory as a fallback.

* *

You can detect this case and map such allocation to access its memory on CPU directly, instead of launching a transfer operation. In order to do that: inspect {@code allocInfo.memoryType}, call {@link #vmaGetMemoryTypeProperties GetMemoryTypeProperties}, and look for {@code VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT} flag in properties of that memory type.

* *

 * VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 * bufCreateInfo.size = sizeof(ConstantBuffer);
 * bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
 * 
 * VmaAllocationCreateInfo allocCreateInfo = {};
 * allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
 * 
 * VkBuffer buf;
 * VmaAllocation alloc;
 * VmaAllocationInfo allocInfo;
 * vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
 * 
 * VkMemoryPropertyFlags memFlags;
 * vmaGetMemoryTypeProperties(allocator, allocInfo.memoryType, &memFlags);
 * if((memFlags & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0)
 * {
 *     // Allocation ended up in mappable memory. You can map it and access it directly.
 *     void* mappedData;
 *     vmaMapMemory(allocator, alloc, &mappedData);
 *     memcpy(mappedData, &constantBufferData, sizeof(constantBufferData));
 *     vmaUnmapMemory(allocator, alloc);
 * }
 * else
 * {
 *     // Allocation ended up in non-mappable memory.
 *     // You need to create CPU-side buffer in VMA_MEMORY_USAGE_CPU_ONLY and make a transfer.
 * }
* *

You can even use the {@link #VMA_ALLOCATION_CREATE_MAPPED_BIT ALLOCATION_CREATE_MAPPED_BIT} flag while creating allocations that are not necessarily {@code HOST_VISIBLE} (e.g. using {@link #VMA_MEMORY_USAGE_GPU_ONLY MEMORY_USAGE_GPU_ONLY}). If the allocation ends up in a memory type that is {@code HOST_VISIBLE}, it will be persistently mapped and you can use it directly. If not, the flag is just ignored. Example:

* *

 * VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 * bufCreateInfo.size = sizeof(ConstantBuffer);
 * bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
 * 
 * VmaAllocationCreateInfo allocCreateInfo = {};
 * allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
 * allocCreateInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT;
 * 
 * VkBuffer buf;
 * VmaAllocation alloc;
 * VmaAllocationInfo allocInfo;
 * vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
 * 
 * if(allocInfo.pMappedData != nullptr)
 * {
 *     // Allocation ended up in mappable memory.
 *     // It's persistently mapped. You can access it directly.
 *     memcpy(allocInfo.pMappedData, &constantBufferData, sizeof(constantBufferData));
 * }
 * else
 * {
 *     // Allocation ended up in non-mappable memory.
 *     // You need to create CPU-side buffer in VMA_MEMORY_USAGE_CPU_ONLY and make a transfer.
 * }
* *

Custom memory pools

* *

A memory pool contains a number of {@code VkDeviceMemory} blocks. The library automatically creates and manages a default pool for each memory type available on the device. The default memory pool automatically grows in size. The size of allocated blocks is also variable and managed automatically.

* *

You can create a custom pool and allocate memory out of it. It can be useful if you want to:

* *
  • Keep certain kinds of allocations separate from others.
  • Enforce a particular, fixed size of Vulkan memory blocks.
  • Limit the maximum amount of Vulkan memory allocated for that pool.
  • Reserve a minimum or fixed amount of Vulkan memory always preallocated for that pool.
* *

To use custom memory pools:

* *
  1. Fill {@link VmaPoolCreateInfo} structure.
  2. Call {@link #vmaCreatePool CreatePool} to obtain a {@code VmaPool} handle.
  3. When making an allocation, set {@link VmaAllocationCreateInfo}{@code ::pool} to this handle. You don't need to specify any other parameters of this structure, like usage.
* *

Example:

* *

 * // Create a pool that can have at most 2 blocks, 128 MiB each.
 * VmaPoolCreateInfo poolCreateInfo = {};
 * poolCreateInfo.memoryTypeIndex = ...
 * poolCreateInfo.blockSize = 128ull * 1024 * 1024;
 * poolCreateInfo.maxBlockCount = 2;
 * 
 * VmaPool pool;
 * vmaCreatePool(allocator, &poolCreateInfo, &pool);
 * 
 * // Allocate a buffer out of it.
 * VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 * bufCreateInfo.size = 1024;
 * bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
 * 
 * VmaAllocationCreateInfo allocCreateInfo = {};
 * allocCreateInfo.pool = pool;
 * 
 * VkBuffer buf;
 * VmaAllocation alloc;
 * VmaAllocationInfo allocInfo;
 * vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &buf, &alloc, &allocInfo);
* *

You have to free all allocations made from this pool before destroying it.

* *

 * vmaDestroyBuffer(allocator, buf, alloc);
 * vmaDestroyPool(allocator, pool);
* *

Choosing memory type index

* *

When creating a pool, you must explicitly specify the memory type index. To find the one suitable for your buffers or images, you can use helper functions {@link #vmaFindMemoryTypeIndexForBufferInfo FindMemoryTypeIndexForBufferInfo}, {@link #vmaFindMemoryTypeIndexForImageInfo FindMemoryTypeIndexForImageInfo}. You need to provide structures with example parameters of buffers or images that you are going to create in that pool.

* *

 * VkBufferCreateInfo exampleBufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 * exampleBufCreateInfo.size = 1024; // Whatever.
 * exampleBufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT; // Change if needed.
 * 
 * VmaAllocationCreateInfo allocCreateInfo = {};
 * allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY; // Change if needed.
 * 
 * uint32_t memTypeIndex;
 * vmaFindMemoryTypeIndexForBufferInfo(allocator, &exampleBufCreateInfo, &allocCreateInfo, &memTypeIndex);
 * 
 * VmaPoolCreateInfo poolCreateInfo = {};
 * poolCreateInfo.memoryTypeIndex = memTypeIndex;
 * // ...
* *

When creating buffers/images allocated in that pool, provide the following parameters:

* *
  • {@code VkBufferCreateInfo}: Prefer to pass the same parameters as above. Otherwise you risk creating resources in a memory type that is not suitable for them, which may result in undefined behavior. Using different {@code VK_BUFFER_USAGE_} flags may work, but you shouldn't create images in a pool intended for buffers or the other way around.
  • {@link VmaAllocationCreateInfo}: You don't need to pass the same parameters. Fill only the {@code pool} member. Other members are ignored anyway.
* *

Defragmentation

* *

Interleaved allocations and deallocations of many objects of varying size can cause fragmentation, which can lead to a situation where the library is unable to find a continuous range of free memory for a new allocation even though there is enough free space, just scattered across many small free ranges between existing allocations.

* *

To mitigate this problem, you can use {@link #vmaDefragment Defragment}. Given a set of allocations, this function can move them to compact used memory, ensure more continuous free space and possibly also free some {@code VkDeviceMemory}. It can work only on allocations made from a memory type that is {@code HOST_VISIBLE}. Allocations are modified to point to the new {@code VkDeviceMemory} and offset. Data in this memory is also {@code memmove}-ed to the new place. However, if you have images or buffers bound to these allocations (and you certainly do), you need to destroy, recreate, and bind them to the new place in memory.

* *
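
A minimal sketch of a defragmentation pass, assuming {@code allocs} is an array of {@code ALLOC_COUNT} allocations made from {@code HOST_VISIBLE} memory (both names are placeholders):

* *

 * VkBool32 allocationsChanged[ALLOC_COUNT] = {};
 * 
 * VmaDefragmentationInfo defragInfo = {};
 * defragInfo.maxBytesToMove = VK_WHOLE_SIZE;    // no limit on bytes moved
 * defragInfo.maxAllocationsToMove = UINT32_MAX; // no limit on allocations moved
 * 
 * VmaDefragmentationStats stats = {};
 * vmaDefragment(allocator, allocs, ALLOC_COUNT, allocationsChanged, &defragInfo, &stats);
 * 
 * // Where allocationsChanged[i] == VK_TRUE, destroy the old VkBuffer/VkImage,
 * // create a new one and bind it to the allocation's new memory.
* *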

For further details and example code, see documentation of function {@link #vmaDefragment Defragment}.

* *

Lost allocations

* *

If your game oversubscribes video memory, it may work OK in previous-generation graphics APIs (DirectX 9, 10, 11, OpenGL) because resources are automatically paged to system RAM. In Vulkan you can't do it because when you run out of memory, an allocation just fails. If you have more data (e.g. textures) than can fit into VRAM and you don't need it all at once, you may want to upload them to GPU on demand and "push out" ones that are not used for a long time to make room for the new ones, effectively using VRAM (or a certain memory pool) as a form of cache. Vulkan Memory Allocator can help you with that by supporting a concept of "lost allocations".

* *

To create an allocation that can become lost, include the {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT} flag in {@link VmaAllocationCreateInfo}{@code ::flags}. Before using a buffer or image bound to such allocation in every new frame, you need to check whether it is not lost. To check it, call {@link #vmaTouchAllocation TouchAllocation}. If the allocation is lost, you should not use it or the buffer/image bound to it. You mustn't forget to destroy this allocation and this buffer/image. {@link #vmaGetAllocationInfo GetAllocationInfo} can also be used for checking the status of the allocation. The allocation is lost when the returned {@link VmaAllocationInfo}{@code ::deviceMemory} == {@code VK_NULL_HANDLE}.

* *

To create an allocation that can make some other allocations lost to make room for it, use {@link #VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT} flag. You will usually use both flags {@link #VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT} and {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT} at the same time.

* *

Warning! The current implementation uses a quite naive, brute-force algorithm, which can make allocation calls that use the {@link #VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT} flag quite slow. A new, more optimal algorithm and data structure to speed this up is planned for the future.

* *

Q: When interleaving creation of new allocations with usage of existing ones, how do you make sure that an allocation won't become lost while it's used in the current frame?

* *

It is ensured because {@link #vmaTouchAllocation TouchAllocation} / {@link #vmaGetAllocationInfo GetAllocationInfo} not only returns allocation status/parameters and checks whether it's not lost, but when it's not, it also atomically marks it as used in the current frame, which makes it impossible to become lost in that frame. It uses a lockless algorithm, so it works fast and doesn't involve locking any internal mutex.

* *

Q: What if my allocation may still be in use by the GPU when it's rendering a previous frame while I already submit a new frame on the CPU?

* *

You can make sure that allocations "touched" by {@link #vmaTouchAllocation TouchAllocation} / {@link #vmaGetAllocationInfo GetAllocationInfo} will not become lost for a number of additional frames back from the current one by specifying this number as {@link VmaAllocatorCreateInfo}{@code ::frameInUseCount} (for the default memory pool) and {@link VmaPoolCreateInfo}{@code ::frameInUseCount} (for a custom pool).

* *
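
For example, a sketch of setting this value at allocator creation (other members omitted):

* *

 * VmaAllocatorCreateInfo allocatorInfo = {};
 * allocatorInfo.physicalDevice = physicalDevice;
 * allocatorInfo.device = device;
 * allocatorInfo.frameInUseCount = 1; // allocations touched in the previous frame also cannot become lost
 * vmaCreateAllocator(&allocatorInfo, &allocator);
* *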

Q: How do you inform the library when new frame starts?

* *

You need to call function {@link #vmaSetCurrentFrameIndex SetCurrentFrameIndex}.

* *
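
A minimal sketch, assuming {@code frameIndex} is a {@code uint32_t} counter you maintain yourself:

* *

 * // At the beginning of each frame:
 * vmaSetCurrentFrameIndex(allocator, ++frameIndex);
* *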

Example code:

* *

 * struct MyBuffer
 * {
 *     VkBuffer m_Buf = VK_NULL_HANDLE;
 *     VmaAllocation m_Alloc = nullptr;
 * 
 *     // Called when the buffer is really needed in the current frame.
 *     void EnsureBuffer();
 * };
 * 
 * void MyBuffer::EnsureBuffer()
 * {
 *     // Buffer has been created.
 *     if(m_Buf != VK_NULL_HANDLE)
 *     {
 *         // Check if its allocation is not lost + mark it as used in current frame.
 *         if(vmaTouchAllocation(allocator, m_Alloc))
 *         {
 *             // It's all OK - safe to use m_Buf.
 *             return;
 *         }
 *     }
 * 
 *     // Buffer doesn't exist yet or is lost - destroy and recreate it.
 * 
 *     vmaDestroyBuffer(allocator, m_Buf, m_Alloc);
 * 
 *     VkBufferCreateInfo bufCreateInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 *     bufCreateInfo.size = 1024;
 *     bufCreateInfo.usage = VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT;
 * 
 *     VmaAllocationCreateInfo allocCreateInfo = {};
 *     allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
 *     allocCreateInfo.flags = VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT |
 *         VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT;
 * 
 *     vmaCreateBuffer(allocator, &bufCreateInfo, &allocCreateInfo, &m_Buf, &m_Alloc, nullptr);
 * }
* *

When using lost allocations, you may see some Vulkan validation layer warnings about overlapping regions of memory bound to different kinds of buffers and images. This is still valid as long as you implement proper handling of lost allocations (like in the example above) and don't use them.

* *

You can create an allocation that is already in lost state from the beginning using function {@link #vmaCreateLostAllocation CreateLostAllocation}. It may be useful if you need a "dummy" allocation that is not null.

* *

You can call function {@link #vmaMakePoolAllocationsLost MakePoolAllocationsLost} to set all eligible allocations in a specified custom pool to lost state. Allocations that have been "touched" in the current frame or {@link VmaPoolCreateInfo}{@code ::frameInUseCount} frames back cannot become lost.

* *

Statistics

* *

This library contains functions that return information about its internal state, especially the amount of memory allocated from Vulkan. Please keep in mind that these functions need to traverse all internal data structures to gather this information, so they may be quite time-consuming. Don't call them too often.

* *

Numeric statistics

* *

You can query for overall statistics of the allocator using function {@link #vmaCalculateStats CalculateStats}. Information is returned using structure {@link VmaStats}. It contains {@link VmaStatInfo} - number of allocated blocks, number of allocations (occupied ranges in these blocks), number of unused (free) ranges in these blocks, number of bytes used and unused (but still allocated from Vulkan) and other information. They are summed across memory heaps, memory types and in total for the whole allocator.

* *
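
A minimal sketch of such a query ({@code stats.total} is the {@link VmaStatInfo} summed for the whole allocator):

* *

 * VmaStats stats;
 * vmaCalculateStats(allocator, &stats);
 * printf("Used bytes: %llu, unused bytes: %llu\n",
 *     (unsigned long long)stats.total.usedBytes,
 *     (unsigned long long)stats.total.unusedBytes);
* *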

You can query for statistics of a custom pool using function {@link #vmaGetPoolStats GetPoolStats}. Information is returned using structure {@link VmaPoolStats}.

* *

You can query for information about a specific allocation using function {@link #vmaGetAllocationInfo GetAllocationInfo}. It fills structure {@link VmaAllocationInfo}.

* *

JSON dump

* *

You can dump internal state of the allocator to a string in JSON format using function {@link #vmaBuildStatsString BuildStatsString}. The result is guaranteed to be correct JSON. It uses ANSI encoding. Any strings provided by the user are copied as-is and properly escaped for JSON, so if they use UTF-8, ISO-8859-2 or any other encoding, this JSON string can be treated as using this encoding. It must be freed using function {@link #vmaFreeStatsString FreeStatsString}.

* *
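
A minimal sketch of producing and releasing such a dump:

* *

 * char* statsString = nullptr;
 * vmaBuildStatsString(allocator, &statsString, VK_TRUE); // VK_TRUE requests the detailed map of blocks
 * // Write statsString to a file or log it...
 * vmaFreeStatsString(allocator, statsString);
* *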

The format of this JSON string is not part of the official documentation of the library, but it will not change in a backward-incompatible way without increasing the library's major version number and an appropriate mention in the changelog.

* *

The JSON string contains all the data that can be obtained using {@link #vmaCalculateStats CalculateStats}. It can also contain a detailed map of allocated memory blocks and their regions - free and occupied by allocations. This allows you e.g. to visualize the memory or assess fragmentation.

* *

Allocation names and user data

* *

Allocation user data

* *

You can annotate allocations with your own information, e.g. for debugging purposes. To do that, fill the {@link VmaAllocationCreateInfo}{@code ::pUserData} field when creating an allocation. It's an opaque {@code void*} pointer. You can use it e.g. as a pointer, some handle, index, key, ordinal number or any other value that would associate the allocation with your custom metadata.

* *

 * VkBufferCreateInfo bufferInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 * // Fill bufferInfo...
 * 
 * MyBufferMetadata* pMetadata = CreateBufferMetadata();
 * 
 * VmaAllocationCreateInfo allocCreateInfo = {};
 * allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
 * allocCreateInfo.pUserData = pMetadata;
 * 
 * VkBuffer buffer;
 * VmaAllocation allocation;
 * vmaCreateBuffer(allocator, &bufferInfo, &allocCreateInfo, &buffer, &allocation, nullptr);
* *

The pointer may be later retrieved as {@link VmaAllocationInfo}{@code ::pUserData}:

* *

 * VmaAllocationInfo allocInfo;
 * vmaGetAllocationInfo(allocator, allocation, &allocInfo);
 * MyBufferMetadata* pMetadata = (MyBufferMetadata*)allocInfo.pUserData;
* *

It can also be changed using function {@link #vmaSetAllocationUserData SetAllocationUserData}.

* *
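
For example (assuming {@code pNewMetadata} is another pointer you own):

* *

 * vmaSetAllocationUserData(allocator, allocation, pNewMetadata);
* *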

Values of (non-zero) allocations' {@code pUserData} are printed in JSON report created by {@link #vmaBuildStatsString BuildStatsString}, in hexadecimal form.

* *

Allocation names

* *

There is an alternative mode available where the {@code pUserData} pointer is used to point to a null-terminated string, giving a name to the allocation. To use this mode, set the {@link #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT} flag in {@link VmaAllocationCreateInfo}{@code ::flags}. Then {@code pUserData} passed as {@link VmaAllocationCreateInfo}{@code ::pUserData} or argument to {@link #vmaSetAllocationUserData SetAllocationUserData} must be either null or a pointer to a null-terminated string. The library creates an internal copy of the string, so the pointer you pass doesn't need to be valid for the whole lifetime of the allocation. You can free it after the call.

* *

 * VkImageCreateInfo imageInfo = { VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO };
 * // Fill imageInfo...
 * 
 * std::string imageName = "Texture: ";
 * imageName += fileName;
 * 
 * VmaAllocationCreateInfo allocCreateInfo = {};
 * allocCreateInfo.usage = VMA_MEMORY_USAGE_GPU_ONLY;
 * allocCreateInfo.flags = VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT;
 * allocCreateInfo.pUserData = imageName.c_str();
 * 
 * VkImage image;
 * VmaAllocation allocation;
 * vmaCreateImage(allocator, &imageInfo, &allocCreateInfo, &image, &allocation, nullptr);
* *

The value of the {@code pUserData} pointer of the allocation will be different from the one you passed when setting the allocation's name - it points to an internally managed buffer that holds a copy of the string.

* *

 * VmaAllocationInfo allocInfo;
 * vmaGetAllocationInfo(allocator, allocation, &allocInfo);
 * const char* imageName = (const char*)allocInfo.pUserData;
 * printf("Image name: %s\n", imageName);
* *

That string is also printed in JSON report created by {@link #vmaBuildStatsString BuildStatsString}.

* *

Recommended usage patterns

* *

Simple patterns

* *
Render targets
* *

When: Any resources that you frequently write and read on GPU, e.g. images used as color attachments (aka "render targets"), depth-stencil attachments, images/buffers used as storage image/buffer (aka "Unordered Access View (UAV)").

* *

What to do: Create them in video memory that is fastest to access from GPU using {@link #VMA_MEMORY_USAGE_GPU_ONLY MEMORY_USAGE_GPU_ONLY}.

* *

Consider using the {@code VK_KHR_dedicated_allocation} extension and/or manually creating them as dedicated allocations using {@link #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT ALLOCATION_CREATE_DEDICATED_MEMORY_BIT}, especially if they are large or if you plan to destroy and recreate them e.g. when display resolution changes. Prefer to create such resources first and all other GPU resources (like textures and vertex buffers) later.

* *
Immutable resources
* *

When: Any resources that you fill on CPU only once (aka "immutable") or infrequently and then read frequently on GPU, e.g. textures, vertex and index buffers, constant buffers that don't change often.

* *

What to do: Create them in video memory that is fastest to access from GPU using {@link #VMA_MEMORY_USAGE_GPU_ONLY MEMORY_USAGE_GPU_ONLY}.

* *

To initialize the content of such a resource, create a CPU-side (aka "staging") copy of it in system memory - {@link #VMA_MEMORY_USAGE_CPU_ONLY MEMORY_USAGE_CPU_ONLY}, map it, fill it, and submit a transfer from it to the GPU resource. You can keep the staging copy if you need it for another upload transfer in the future. If you don't, you can destroy it or reuse this buffer for uploading a different resource after the transfer finishes.

* *
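
A sketch of such an upload through a staging buffer ({@code dataSize}, {@code pData} and {@code commandBuffer} are assumed to exist, and {@code gpuBuffer} is assumed to be a device-local buffer created with {@code VK_BUFFER_USAGE_TRANSFER_DST_BIT}):

* *

 * VkBufferCreateInfo stagingInfo = { VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO };
 * stagingInfo.size = dataSize;
 * stagingInfo.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;
 * 
 * VmaAllocationCreateInfo stagingAllocCreateInfo = {};
 * stagingAllocCreateInfo.usage = VMA_MEMORY_USAGE_CPU_ONLY;
 * stagingAllocCreateInfo.flags = VMA_ALLOCATION_CREATE_MAPPED_BIT;
 * 
 * VkBuffer stagingBuf;
 * VmaAllocation stagingAlloc;
 * VmaAllocationInfo stagingAllocInfo;
 * vmaCreateBuffer(allocator, &stagingInfo, &stagingAllocCreateInfo, &stagingBuf, &stagingAlloc, &stagingAllocInfo);
 * 
 * memcpy(stagingAllocInfo.pMappedData, pData, dataSize);
 * 
 * VkBufferCopy region = { 0, 0, dataSize }; // srcOffset, dstOffset, size
 * vkCmdCopyBuffer(commandBuffer, stagingBuf, gpuBuffer, 1, &region);
 * 
 * // Submit the command buffer and wait for the transfer to finish, then:
 * // vmaDestroyBuffer(allocator, stagingBuf, stagingAlloc);
* *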

Prefer to create just buffers in system memory rather than images, even for uploading textures. Use {@code vkCmdCopyBufferToImage()}. Don't use images with {@code VK_IMAGE_TILING_LINEAR}.

* *
Dynamic resources
* *

When: Any resources that change frequently (aka "dynamic"), e.g. every frame or every draw call, written on CPU, read on GPU.

* *

What to do: Create them using {@link #VMA_MEMORY_USAGE_CPU_TO_GPU MEMORY_USAGE_CPU_TO_GPU}. You can map them and write to them directly on CPU, as well as read from them on GPU.

* *

This is a more complex situation. Different solutions are possible, and the best one depends on the specific GPU type, but you can use this simple approach to start. Prefer to write to such a resource sequentially (e.g. using {@code memcpy}). Don't perform random access or any reads from it, as it may be very slow.

* *
Readback
* *

When: Resources that contain data written by GPU that you want to read back on CPU, e.g. results of some computations.

* *

What to do: Create them using {@link #VMA_MEMORY_USAGE_GPU_TO_CPU MEMORY_USAGE_GPU_TO_CPU}. You can write to them directly on GPU, as well as map and read them on CPU.

* *

Advanced patterns

* *
Detecting integrated graphics
* *

You can support integrated graphics (like Intel HD Graphics, AMD APU) better by detecting it in Vulkan. To do it, call {@code vkGetPhysicalDeviceProperties()}, inspect {@code VkPhysicalDeviceProperties::deviceType} and look for {@code VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU}. When you find it, you can assume that memory is unified and all memory types are equally fast to access from GPU, regardless of {@code VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT}.

* *
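
A minimal sketch of this check:

* *

 * VkPhysicalDeviceProperties props;
 * vkGetPhysicalDeviceProperties(physicalDevice, &props);
 * bool isIntegratedGpu = (props.deviceType == VK_PHYSICAL_DEVICE_TYPE_INTEGRATED_GPU);
* *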

You can then sum up sizes of all available memory heaps and treat them as useful for your GPU resources, instead of only {@code DEVICE_LOCAL} ones. You can also prefer to create your resources in memory types that are {@code HOST_VISIBLE} to map them directly instead of submitting an explicit transfer (see below).

* *
Direct access versus transfer
* *

For resources that you frequently write on CPU and read on GPU, many solutions are possible:

* *
  1. Create one copy in video memory using {@link #VMA_MEMORY_USAGE_GPU_ONLY MEMORY_USAGE_GPU_ONLY}, a second copy in system memory using {@link #VMA_MEMORY_USAGE_CPU_ONLY MEMORY_USAGE_CPU_ONLY} and submit an explicit transfer each time.
  2. Create just a single copy using {@link #VMA_MEMORY_USAGE_CPU_TO_GPU MEMORY_USAGE_CPU_TO_GPU}, map it and fill it on CPU, read it directly on GPU.
  3. Create just a single copy using {@link #VMA_MEMORY_USAGE_CPU_ONLY MEMORY_USAGE_CPU_ONLY}, map it and fill it on CPU, read it directly on GPU.
* *

Which solution is the most efficient depends on your resource and especially on the GPU. It is best to measure it and then make the decision. Some general recommendations:

* *
  • On integrated graphics use (2) or (3) to avoid unnecessary time and memory overhead related to using a second copy.
  • For small resources (e.g. constant buffers) use (2). Discrete AMD cards have a special 256 MiB pool of video memory that is directly mappable. Even if the resource ends up in system memory, its data may be cached on GPU after the first fetch over the PCIe bus.
  • For larger resources (e.g. textures), decide between (1) and (2). You may want to differentiate NVIDIA and AMD, e.g. by looking for a memory type that is both {@code DEVICE_LOCAL} and {@code HOST_VISIBLE}. When you find it, use (2), otherwise use (1).
* *

Similarly, for resources that you frequently write on GPU and read on CPU, multiple solutions are possible:

* *
  1. Create one copy in video memory using {@link #VMA_MEMORY_USAGE_GPU_ONLY MEMORY_USAGE_GPU_ONLY}, a second copy in system memory using {@link #VMA_MEMORY_USAGE_GPU_TO_CPU MEMORY_USAGE_GPU_TO_CPU} and submit an explicit transfer each time.
  2. Create just a single copy using {@link #VMA_MEMORY_USAGE_GPU_TO_CPU MEMORY_USAGE_GPU_TO_CPU}, write to it directly on GPU, map it and read it on CPU.
* *

You should take some measurements to decide which option is faster in case of your specific resource.

* *

If you don't want to specialize your code for specific types of GPUs, you can still make a simple optimization for the case when your resource ends up in mappable memory: use it directly in this case instead of creating a CPU-side staging copy. For details, see Finding out if memory is mappable.

* *

Configuration

* *

Custom host memory allocator

* *

If you use a custom allocator for CPU memory rather than the default {@code new} and {@code delete} operators from C++, you can make this library use your allocator as well by filling the optional member {@link VmaAllocatorCreateInfo}{@code ::pAllocationCallbacks}. These functions will be passed to Vulkan, as well as used by the library itself to make any CPU-side allocations.

* *

Device memory allocation callbacks

* *

The library makes calls to {@code vkAllocateMemory()} and {@code vkFreeMemory()} internally. You can set up callbacks to be informed about these calls, e.g. for the purpose of gathering some statistics. To do it, fill the optional member {@link VmaAllocatorCreateInfo}{@code ::pDeviceMemoryCallbacks}.

* *

Device heap memory limit

* *

If you want to test how your program behaves with a limited amount of Vulkan device memory available, without switching your graphics card to one that really has smaller VRAM, you can use a feature of this library intended for this purpose. To do it, fill the optional member {@link VmaAllocatorCreateInfo}{@code ::pHeapSizeLimit}.

* *
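
A sketch of limiting heap 0 to 256 MiB while leaving the other heaps unlimited ({@code VK_WHOLE_SIZE} means no limit for a heap; an array sized {@code VK_MAX_MEMORY_HEAPS} is assumed to cover all heaps reported by the device):

* *

 * VkDeviceSize heapSizeLimit[VK_MAX_MEMORY_HEAPS];
 * for(uint32_t i = 0; i < VK_MAX_MEMORY_HEAPS; ++i)
 *     heapSizeLimit[i] = VK_WHOLE_SIZE; // no limit
 * heapSizeLimit[0] = 256ull * 1024 * 1024;
 * 
 * VmaAllocatorCreateInfo allocatorInfo = {};
 * allocatorInfo.physicalDevice = physicalDevice;
 * allocatorInfo.device = device;
 * allocatorInfo.pHeapSizeLimit = heapSizeLimit;
 * 
 * VmaAllocator allocator;
 * vmaCreateAllocator(&allocatorInfo, &allocator);
* *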

VK_KHR_dedicated_allocation

* *

{@code VK_KHR_dedicated_allocation} is a Vulkan extension which can be used to improve performance on some GPUs. It augments the Vulkan API with the possibility to query the driver whether it prefers a particular buffer or image to have its own, dedicated allocation (separate {@code VkDeviceMemory} block) for better efficiency - to be able to do some internal optimizations.

* *

The extension is supported by this library. It will be used automatically when enabled. To enable it:

* *

1. When creating the Vulkan device, check if the following 2 device extensions are supported (call {@code vkEnumerateDeviceExtensionProperties()}). If yes, enable them (fill {@link VkDeviceCreateInfo}{@code ::ppEnabledExtensionNames}).

* *
  • {@code VK_KHR_get_memory_requirements2}
  • {@code VK_KHR_dedicated_allocation}
* *

If you enabled these extensions:

* *

2. Use the {@link #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT} flag when creating your {@code VmaAllocator} to inform the library that you enabled the required extensions and you want the library to use them.

* *

 * allocatorInfo.flags |= VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT;
 * 
 * vmaCreateAllocator(&allocatorInfo, &allocator);
* *

That's all. The extension will be automatically used whenever you create a buffer using {@link #vmaCreateBuffer CreateBuffer} or image using {@link #vmaCreateImage CreateImage}.

* *

When using the extension together with Vulkan Validation Layer, you will receive warnings like this:

* *

 * vkBindBufferMemory(): Binding memory to buffer 0x33 but vkGetBufferMemoryRequirements() has not been called on that buffer.
* *

It is OK, you should just ignore it. It happens because you use function {@code vkGetBufferMemoryRequirements2KHR()} instead of standard {@code vkGetBufferMemoryRequirements()}, while the validation layer seems to be unaware of it.

* *

To learn more about this extension, see:

* * * *

General considerations

* *

Thread safety

* *
  • The library has no global state, so separate {@code VmaAllocator} objects can be used independently. There should be no need to create multiple such objects though - one per {@code VkDevice} is enough.
  • By default, all calls to functions that take {@code VmaAllocator} as first parameter are safe to call from multiple threads simultaneously because they are synchronized internally when needed.
  • When the allocator is created with the {@link #VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT} flag, calls to functions that take such a {@code VmaAllocator} object must be synchronized externally.
  • Access to a {@code VmaAllocation} object must be externally synchronized. For example, you must not call {@link #vmaGetAllocationInfo GetAllocationInfo} and {@link #vmaMapMemory MapMemory} from different threads at the same time if you pass the same {@code VmaAllocation} object to these functions.
* *

Allocation algorithm

* *

The library uses the following algorithm for allocation, in order:

* *
  1. Try to find a free range of memory in existing blocks.
  2. If failed, try to create a new block of {@code VkDeviceMemory}, with preferred block size.
  3. If failed, try to create such a block with size/2, size/4, size/8.
  4. If failed and the {@link #VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT} flag was specified, try to find space in existing blocks, possibly making some other allocations lost.
  5. If failed, try to allocate a separate {@code VkDeviceMemory} for this allocation, just like when you use {@link #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT ALLOCATION_CREATE_DEDICATED_MEMORY_BIT}.
  6. If failed, choose another memory type that meets the requirements specified in {@link VmaAllocationCreateInfo} and go to point 1.
  7. If failed, return {@code VK_ERROR_OUT_OF_DEVICE_MEMORY}.
* *

Features not supported

* *

Features deliberately excluded from the scope of this library:

* *
  • Data transfer - issuing commands that transfer data between buffers or images, any usage of {@code VkCommandBuffer} or {@code VkQueue} and related synchronization is the responsibility of the user.
  • Support for any programming languages other than C/C++. Bindings to other languages are welcomed as external projects.
*/ public class Vma { /** * Flags for created {@code VmaAllocator}. ({@code VmaAllocatorCreateFlagBits}) * *
Enum values:
* *
    *
  • {@link #VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT} - * Allocator and all objects created from it will not be synchronized internally, so you must guarantee they are used from only one thread at a time * or synchronized externally by you. * *

    Using this flag may increase performance because internal mutexes are not used.

    *
  • *
  • {@link #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT} - * Enables usage of {@code VK_KHR_dedicated_allocation} extension. * *

    Using this extension will automatically allocate dedicated blocks of memory for some buffers and images instead of suballocating place for them out of bigger memory blocks (as if you explicitly used {@link #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT ALLOCATION_CREATE_DEDICATED_MEMORY_BIT} flag) when it is recommended by the driver. It may improve performance on some GPUs.

    * *

    You may set this flag only if you found out that the following device extensions are supported, you enabled them while creating the Vulkan device passed as {@link VmaAllocatorCreateInfo}{@code ::device}, and you want them to be used internally by this library:

    * *
      *
    • {@code VK_KHR_get_memory_requirements2}
    • *
    • {@code VK_KHR_dedicated_allocation}
    • *
    * *

    When this flag is set, you can experience the following warnings reported by the Vulkan validation layer. You can ignore them.

    * *
    
         * > vkBindBufferMemory(): Binding memory to buffer 0x2d but vkGetBufferMemoryRequirements() has not been called on that buffer.
    *
  • *
*/ public static final int VMA_ALLOCATOR_CREATE_EXTERNALLY_SYNCHRONIZED_BIT = 0x1, VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT = 0x2; /** * {@code VmaMemoryUsage} * *
Enum values:
* *
    *
  • {@link #VMA_MEMORY_USAGE_UNKNOWN MEMORY_USAGE_UNKNOWN} - * No intended memory usage specified. * *

    Use other members of {@link VmaAllocationCreateInfo} to specify your requirements.

    *
  • *
  • {@link #VMA_MEMORY_USAGE_GPU_ONLY MEMORY_USAGE_GPU_ONLY} - * Memory will be used on device only, so fast access from the device is preferred. * *

    It usually means device-local GPU (video) memory. No need to be mappable on host. It is roughly equivalent of {@code D3D12_HEAP_TYPE_DEFAULT}.

    * *

    Usage:

    * *
      *
    • Resources written and read by device, e.g. images used as attachments.
    • *
    • Resources transferred from host once (immutable) or infrequently and read by device multiple times, e.g. textures to be sampled, vertex buffers, uniform (constant) buffers, and majority of other types of resources used by device.
    • *
    * *

    Allocation may still end up in {@code HOST_VISIBLE} memory on some implementations. In such case, you are free to map it. You can use {@link #VMA_ALLOCATION_CREATE_MAPPED_BIT ALLOCATION_CREATE_MAPPED_BIT} with this usage type.

    *
  • *
  • {@link #VMA_MEMORY_USAGE_CPU_ONLY MEMORY_USAGE_CPU_ONLY} - * Memory will be mappable on host. * *

    It usually means CPU (system) memory. Resources created in this pool may still be accessible to the device, but access to them can be slower. Guarantees to be {@code HOST_VISIBLE} and {@code HOST_COHERENT}. CPU read may be uncached. It is roughly equivalent of {@code D3D12_HEAP_TYPE_UPLOAD}.

    * *

    Usage: Staging copy of resources used as transfer source.

    *
  • *
  • {@link #VMA_MEMORY_USAGE_CPU_TO_GPU MEMORY_USAGE_CPU_TO_GPU} - * Memory that is both mappable on host (guarantees to be {@code HOST_VISIBLE}) and preferably fast to access by GPU. * *

    CPU reads may be uncached and very slow.

    * *

    Usage: Resources written frequently by host (dynamic), read by device. E.g. textures, vertex buffers, uniform buffers updated every frame or every draw call.

    *
  • *
  • {@link #VMA_MEMORY_USAGE_GPU_TO_CPU MEMORY_USAGE_GPU_TO_CPU} - * Memory mappable on host (guarantees to be {@code HOST_VISIBLE}) and cached. * *

    It is roughly equivalent of {@code D3D12_HEAP_TYPE_READBACK}.

    * *

    Usage:

    * *
      *
    • Resources written by device, read by host - results of some computations, e.g. screen capture, average scene luminance for HDR tone mapping.
    • *
    • Any resources read or accessed randomly on host, e.g. CPU-side copy of vertex buffer used as source of transfer, but also used for collision detection.
    • *
    *
  • *
*/ public static final int VMA_MEMORY_USAGE_UNKNOWN = 0, VMA_MEMORY_USAGE_GPU_ONLY = 2, VMA_MEMORY_USAGE_CPU_ONLY = 1, VMA_MEMORY_USAGE_CPU_TO_GPU = 3, VMA_MEMORY_USAGE_GPU_TO_CPU = 4; /** * Flags to be passed as {@link VmaAllocationCreateInfo}{@code ::flags}. ({@code VmaAllocationCreateFlagBits}) * *
Enum values:
* *
    *
  • {@link #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT ALLOCATION_CREATE_DEDICATED_MEMORY_BIT} - * Set this flag if the allocation should have its own memory block. * *

    Use it for special, big resources, like fullscreen images used as attachments.

    * *

    This flag must also be used for host visible resources that you want to map simultaneously because otherwise they might end up as regions of the same {@code VkDeviceMemory}, while mapping the same {@code VkDeviceMemory} multiple times simultaneously is illegal.

    * *

    You should not use this flag if {@link VmaAllocationCreateInfo}{@code ::pool} is not null.

    *
  • *
  • {@link #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT ALLOCATION_CREATE_NEVER_ALLOCATE_BIT} - * Set this flag to only try to allocate from existing {@code VkDeviceMemory} blocks and never create new such block. * *

    If new allocation cannot be placed in any of the existing blocks, allocation fails with {@code VK_ERROR_OUT_OF_DEVICE_MEMORY} error.

    * *

    You should not use {@link #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT ALLOCATION_CREATE_DEDICATED_MEMORY_BIT} and {@link #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT ALLOCATION_CREATE_NEVER_ALLOCATE_BIT} at the same time. It makes no sense.

    * *

    If {@link VmaAllocationCreateInfo}{@code ::pool} is not null, this flag is implied and ignored.

    *
  • *
  • {@link #VMA_ALLOCATION_CREATE_MAPPED_BIT ALLOCATION_CREATE_MAPPED_BIT} - * Set this flag to use a memory that will be persistently mapped and retrieve pointer to it. * *

    Pointer to mapped memory will be returned through {@link VmaAllocationInfo}{@code ::pMappedData}.

    * *

    It is valid to use this flag for an allocation made from a memory type that is not {@code HOST_VISIBLE}. This flag is then ignored and memory is not mapped. This is useful if you need an allocation that is efficient to use on GPU ({@code DEVICE_LOCAL}) and still want to map it directly if possible on platforms that support it (e.g. Intel GPU).

    * *

    You should not use this flag together with {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT}.

    *
  • *
  • {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT} - * Allocation created with this flag can become lost as a result of another allocation with {@link #VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT} flag, so you * must check it before use. * *

    To check if allocation is not lost, call {@link #vmaGetAllocationInfo GetAllocationInfo} and check if {@link VmaAllocationInfo}{@code ::deviceMemory} is not {@code VK_NULL_HANDLE}.

    * *

    For details about supporting lost allocations, see Lost Allocations chapter of User Guide on Main Page.

    * *

    You should not use this flag together with {@link #VMA_ALLOCATION_CREATE_MAPPED_BIT ALLOCATION_CREATE_MAPPED_BIT}.

    *
  • *
  • {@link #VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT} - * While creating allocation using this flag, other allocations that were created with flag {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT} can become lost. * *

    For details about supporting lost allocations, see Lost Allocations chapter of User Guide on Main Page.

    *
  • *
  • {@link #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT} - * Set this flag to treat {@link VmaAllocationCreateInfo}{@code ::pUserData} as pointer to a null-terminated string. Instead of copying pointer value, a * local copy of the string is made and stored in allocation's {@code pUserData}. The string is automatically freed together with the allocation. It * is also used in {@link #vmaBuildStatsString BuildStatsString}. *
  • *
*/ public static final int VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT = 0x1, VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT = 0x2, VMA_ALLOCATION_CREATE_MAPPED_BIT = 0x4, VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT = 0x8, VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT = 0x10, VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT = 0x20; /** * Flags to be passed as {@link VmaPoolCreateInfo}{@code ::flags}. ({@code VmaPoolCreateFlagBits}) * *
Enum values:
* *
    *
  • {@link #VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT} - * Use this flag if you always allocate only buffers and linear images or only optimal images out of this pool and so Buffer-Image Granularity can be * ignored. * *

    This is an optional optimization flag.

    * *

    If you always allocate using {@link #vmaCreateBuffer CreateBuffer}, {@link #vmaCreateImage CreateImage}, {@link #vmaAllocateMemoryForBuffer AllocateMemoryForBuffer}, then you don't need to use it because the allocator knows the exact type of your allocations so it can handle Buffer-Image Granularity in the optimal way.

    * *

    If you also allocate using {@link #vmaAllocateMemoryForImage AllocateMemoryForImage} or {@link #vmaAllocateMemory AllocateMemory}, exact type of such allocations is not known, so allocator must be conservative in handling Buffer-Image Granularity, which can lead to suboptimal allocation (wasted memory). In that case, if you can make sure you always allocate only buffers and linear images or only optimal images out of this pool, use this flag to make allocator disregard Buffer-Image Granularity and so make allocations more optimal.

    *
  • *
*/ public static final int VMA_POOL_CREATE_IGNORE_BUFFER_IMAGE_GRANULARITY_BIT = 0x2; static { Library.initialize(); } protected Vma() { throw new UnsupportedOperationException(); } // --- [ vmaCreateAllocator ] --- /** Unsafe version of: {@link #vmaCreateAllocator CreateAllocator} */ public static native int nvmaCreateAllocator(long pCreateInfo, long pAllocator); /** * Creates Allocator object. * *

LWJGL: Use {@link VmaVulkanFunctions}{@code ::set(VkInstance, VkDevice)} to populate the {@link VmaAllocatorCreateInfo}{@code ::pVulkanFunctions} struct.

*/ @NativeType("VkResult") public static int vmaCreateAllocator(@NativeType("VmaAllocatorCreateInfo const *") VmaAllocatorCreateInfo pCreateInfo, @NativeType("VmaAllocator *") PointerBuffer pAllocator) { if (CHECKS) { check(pAllocator, 1); VmaAllocatorCreateInfo.validate(pCreateInfo.address()); } return nvmaCreateAllocator(pCreateInfo.address(), memAddress(pAllocator)); } // --- [ vmaDestroyAllocator ] --- /** Unsafe version of: {@link #vmaDestroyAllocator DestroyAllocator} */ public static native void nvmaDestroyAllocator(long allocator); /** Destroys allocator object. */ public static void vmaDestroyAllocator(@NativeType("VmaAllocator") long allocator) { if (CHECKS) { check(allocator); } nvmaDestroyAllocator(allocator); } // --- [ vmaGetPhysicalDeviceProperties ] --- /** Unsafe version of: {@link #vmaGetPhysicalDeviceProperties GetPhysicalDeviceProperties} */ public static native void nvmaGetPhysicalDeviceProperties(long allocator, long ppPhysicalDeviceProperties); /** * {@code PhysicalDeviceProperties} are fetched from {@code physicalDevice} by the allocator. You can access it here, without fetching it again on your * own. */ public static void vmaGetPhysicalDeviceProperties(@NativeType("VmaAllocator") long allocator, @NativeType("VkPhysicalDeviceProperties const **") PointerBuffer ppPhysicalDeviceProperties) { if (CHECKS) { check(allocator); check(ppPhysicalDeviceProperties, 1); } nvmaGetPhysicalDeviceProperties(allocator, memAddress(ppPhysicalDeviceProperties)); } // --- [ vmaGetMemoryProperties ] --- /** Unsafe version of: {@link #vmaGetMemoryProperties GetMemoryProperties} */ public static native void nvmaGetMemoryProperties(long allocator, long ppPhysicalDeviceMemoryProperties); /** * {@code PhysicalDeviceMemoryProperties} are fetched from {@code physicalDevice} by the allocator. You can access it here, without fetching it again on * your own. */ public static void vmaGetMemoryProperties(@NativeType("VmaAllocator") long allocator, @NativeType("VkPhysicalDeviceMemoryProperties const **") PointerBuffer ppPhysicalDeviceMemoryProperties) { if (CHECKS) { check(allocator); check(ppPhysicalDeviceMemoryProperties, 1); } nvmaGetMemoryProperties(allocator, memAddress(ppPhysicalDeviceMemoryProperties)); } // --- [ vmaGetMemoryTypeProperties ] --- /** Unsafe version of: {@link #vmaGetMemoryTypeProperties GetMemoryTypeProperties} */ public static native void nvmaGetMemoryTypeProperties(long allocator, int memoryTypeIndex, long pFlags); /** * Given Memory Type Index, returns Property Flags of this memory type. * *

This is just a convenience function. The same information can be obtained using {@link #vmaGetMemoryProperties GetMemoryProperties}.
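A short sketch of querying the flags of one memory type through this binding; {@code allocator} and {@code memoryTypeIndex} are placeholders for values obtained elsewhere.

     * try (MemoryStack stack = stackPush()) {
     *     IntBuffer pFlags = stack.mallocInt(1);
     *     vmaGetMemoryTypeProperties(allocator, memoryTypeIndex, pFlags);
     *
     *     // e.g. check whether allocations of this type can be mapped with vmaMapMemory()
     *     boolean hostVisible = (pFlags.get(0) & VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT) != 0;
     * }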

*/ public static void vmaGetMemoryTypeProperties(@NativeType("VmaAllocator") long allocator, @NativeType("uint32_t") int memoryTypeIndex, @NativeType("VkMemoryPropertyFlags *") IntBuffer pFlags) { if (CHECKS) { check(allocator); check(pFlags, 1); } nvmaGetMemoryTypeProperties(allocator, memoryTypeIndex, memAddress(pFlags)); } // --- [ vmaSetCurrentFrameIndex ] --- /** Unsafe version of: {@link #vmaSetCurrentFrameIndex SetCurrentFrameIndex} */ public static native void nvmaSetCurrentFrameIndex(long allocator, int frameIndex); /** * Sets index of the current frame. * *

This function must be used if you make allocations with {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT} and {@link #VMA_ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT ALLOCATION_CREATE_CAN_MAKE_OTHER_LOST_BIT} flags to inform the allocator when a new frame begins. Allocations queried using {@link #vmaGetAllocationInfo GetAllocationInfo} cannot become lost in the current frame.
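A sketch of how a render loop might report the frame index; the loop condition and frame counter are application-specific assumptions.

     * int frameIndex = 0;
     * while (rendering) { // 'rendering' is a placeholder for your loop condition
     *     vmaSetCurrentFrameIndex(allocator, frameIndex);
     *
     *     // ... record and submit this frame's work, touching allocations that may become lost ...
     *
     *     frameIndex++;
     * }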

*/ public static void vmaSetCurrentFrameIndex(@NativeType("VmaAllocator") long allocator, @NativeType("uint32_t") int frameIndex) { if (CHECKS) { check(allocator); } nvmaSetCurrentFrameIndex(allocator, frameIndex); } // --- [ vmaCalculateStats ] --- /** Unsafe version of: {@link #vmaCalculateStats CalculateStats} */ public static native void nvmaCalculateStats(long allocator, long pStats); /** Retrieves statistics from current state of the Allocator. */ public static void vmaCalculateStats(@NativeType("VmaAllocator") long allocator, @NativeType("VmaStats *") VmaStats pStats) { if (CHECKS) { check(allocator); } nvmaCalculateStats(allocator, pStats.address()); } // --- [ vmaBuildStatsString ] --- /** Unsafe version of: {@link #vmaBuildStatsString BuildStatsString} */ public static native void nvmaBuildStatsString(long allocator, long ppStatsString, int detailedMap); /** * Builds and returns statistics as string in JSON format. * * @param ppStatsString must be freed using {@link #vmaFreeStatsString FreeStatsString} function */ public static void vmaBuildStatsString(@NativeType("VmaAllocator") long allocator, @NativeType("char **") PointerBuffer ppStatsString, @NativeType("VkBool32") boolean detailedMap) { if (CHECKS) { check(allocator); check(ppStatsString, 1); } nvmaBuildStatsString(allocator, memAddress(ppStatsString), detailedMap ? 1 : 0); } // --- [ vmaFreeStatsString ] --- public static native void nvmaFreeStatsString(long allocator, long pStatsString); public static void vmaFreeStatsString(@NativeType("VmaAllocator") long allocator, @NativeType("char *") ByteBuffer pStatsString) { if (CHECKS) { check(allocator); checkNT1(pStatsString); } nvmaFreeStatsString(allocator, memAddress(pStatsString)); } public static void vmaFreeStatsString(@NativeType("VmaAllocator") long allocator, @NativeType("char *") CharSequence pStatsString) { if (CHECKS) { check(allocator); } MemoryStack stack = stackGet(); int stackPointer = stack.getPointer(); try { ByteBuffer pStatsStringEncoded = stack.ASCII(pStatsString); nvmaFreeStatsString(allocator, memAddress(pStatsStringEncoded)); } finally { stack.setPointer(stackPointer); } } // --- [ vmaFindMemoryTypeIndex ] --- /** Unsafe version of: {@link #vmaFindMemoryTypeIndex FindMemoryTypeIndex} */ public static native int nvmaFindMemoryTypeIndex(long allocator, int memoryTypeBits, long pAllocationCreateInfo, long pMemoryTypeIndex); /** * Helps to find {@code memoryTypeIndex}, given {@code memoryTypeBits} and {@link VmaAllocationCreateInfo}. * *

This algorithm tries to find a memory type that:

* *
  • Is allowed by {@code memoryTypeBits}.
  • Contains all the flags from {@code pAllocationCreateInfo->requiredFlags}.
  • Matches intended usage.
  • Has as many flags from {@code pAllocationCreateInfo->preferredFlags} as possible.
* * @return {@code VK_ERROR_FEATURE_NOT_PRESENT} if not found. * *

Receiving such a result from this function or any other allocating function probably means that your device doesn't support any memory type with the requested features for the specific type of resource you want to use it for. Please check the parameters of your resource, like image layout ({@code OPTIMAL} versus {@code LINEAR}) or mip level count.
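A sketch of a typical query, here for a host-visible (staging) memory type; the {@code VMA_MEMORY_USAGE_*} value and {@code memoryTypeBits} are illustrative.

     * try (MemoryStack stack = stackPush()) {
     *     VmaAllocationCreateInfo allocCreateInfo = VmaAllocationCreateInfo.calloc()
     *         .usage(VMA_MEMORY_USAGE_CPU_ONLY)
     *         .requiredFlags(VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT);
     *
     *     IntBuffer pMemoryTypeIndex = stack.mallocInt(1);
     *     int result = vmaFindMemoryTypeIndex(allocator, memoryTypeBits, allocCreateInfo, pMemoryTypeIndex);
     *     allocCreateInfo.free();
     *
     *     if (result == VK_SUCCESS) {
     *         int memoryTypeIndex = pMemoryTypeIndex.get(0);
     *         // e.g. use it as VmaPoolCreateInfo::memoryTypeIndex
     *     }
     * }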

*/ @NativeType("VkResult") public static int vmaFindMemoryTypeIndex(@NativeType("VmaAllocator") long allocator, @NativeType("uint32_t") int memoryTypeBits, @NativeType("VmaAllocationCreateInfo const *") VmaAllocationCreateInfo pAllocationCreateInfo, @NativeType("uint32_t *") IntBuffer pMemoryTypeIndex) { if (CHECKS) { check(allocator); check(pMemoryTypeIndex, 1); VmaAllocationCreateInfo.validate(pAllocationCreateInfo.address()); } return nvmaFindMemoryTypeIndex(allocator, memoryTypeBits, pAllocationCreateInfo.address(), memAddress(pMemoryTypeIndex)); } // --- [ vmaFindMemoryTypeIndexForBufferInfo ] --- /** Unsafe version of: {@link #vmaFindMemoryTypeIndexForBufferInfo FindMemoryTypeIndexForBufferInfo} */ public static native int nvmaFindMemoryTypeIndexForBufferInfo(long allocator, long pBufferCreateInfo, long pAllocationCreateInfo, long pMemoryTypeIndex); /** * Helps to find {@code memoryTypeIndex}, given {@code VkBufferCreateInfo} and {@link VmaAllocationCreateInfo}. * *

It can be useful e.g. to determine the value to be used as {@link VmaPoolCreateInfo}{@code ::memoryTypeIndex}. It internally creates a temporary, dummy buffer that never has memory bound. It is just a convenience function, equivalent to calling:

* *
  • {@code vkCreateBuffer}
  • {@code vkGetBufferMemoryRequirements}
  • {@link #vmaFindMemoryTypeIndex FindMemoryTypeIndex}
  • {@code vkDestroyBuffer}
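A sketch that feeds the result into custom pool creation via {@link #vmaCreatePool CreatePool}; the buffer parameters and block size are illustrative only.

     * try (MemoryStack stack = stackPush()) {
     *     VkBufferCreateInfo bufferCreateInfo = VkBufferCreateInfo.calloc()
     *         .sType(VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO)
     *         .size(1024)
     *         .usage(VK_BUFFER_USAGE_UNIFORM_BUFFER_BIT);
     *
     *     VmaAllocationCreateInfo allocCreateInfo = VmaAllocationCreateInfo.calloc()
     *         .usage(VMA_MEMORY_USAGE_CPU_TO_GPU);
     *
     *     IntBuffer pMemoryTypeIndex = stack.mallocInt(1);
     *     vmaFindMemoryTypeIndexForBufferInfo(allocator, bufferCreateInfo, allocCreateInfo, pMemoryTypeIndex);
     *
     *     VmaPoolCreateInfo poolCreateInfo = VmaPoolCreateInfo.calloc()
     *         .memoryTypeIndex(pMemoryTypeIndex.get(0))
     *         .blockSize(16L * 1024 * 1024); // 16 MB per block, purely illustrative
     *
     *     PointerBuffer pPool = stack.mallocPointer(1);
     *     vmaCreatePool(allocator, poolCreateInfo, pPool);
     *     long pool = pPool.get(0);
     *
     *     bufferCreateInfo.free();
     *     allocCreateInfo.free();
     *     poolCreateInfo.free();
     *
     *     // ... allocate from the pool via VmaAllocationCreateInfo::pool, later vmaDestroyPool(allocator, pool) ...
     * }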
*/ @NativeType("VkResult") public static int vmaFindMemoryTypeIndexForBufferInfo(@NativeType("VmaAllocator") long allocator, @NativeType("VkBufferCreateInfo const *") VkBufferCreateInfo pBufferCreateInfo, @NativeType("VmaAllocationCreateInfo const *") VmaAllocationCreateInfo pAllocationCreateInfo, @NativeType("uint32_t *") IntBuffer pMemoryTypeIndex) { if (CHECKS) { check(allocator); check(pMemoryTypeIndex, 1); VkBufferCreateInfo.validate(pBufferCreateInfo.address()); VmaAllocationCreateInfo.validate(pAllocationCreateInfo.address()); } return nvmaFindMemoryTypeIndexForBufferInfo(allocator, pBufferCreateInfo.address(), pAllocationCreateInfo.address(), memAddress(pMemoryTypeIndex)); } // --- [ vmaFindMemoryTypeIndexForImageInfo ] --- /** Unsafe version of: {@link #vmaFindMemoryTypeIndexForImageInfo FindMemoryTypeIndexForImageInfo} */ public static native int nvmaFindMemoryTypeIndexForImageInfo(long allocator, long pImageCreateInfo, long pAllocationCreateInfo, long pMemoryTypeIndex); /** * Helps to find {@code memoryTypeIndex}, given {@code VkImageCreateInfo} and {@link VmaAllocationCreateInfo}. * *

It can be useful e.g. to determine the value to be used as {@link VmaPoolCreateInfo}{@code ::memoryTypeIndex}. It internally creates a temporary, dummy image that never has memory bound. It is just a convenience function, equivalent to calling:

* *
  • {@code vkCreateImage}
  • {@code vkGetImageMemoryRequirements}
  • {@link #vmaFindMemoryTypeIndex FindMemoryTypeIndex}
  • {@code vkDestroyImage}
*/ @NativeType("VkResult") public static int vmaFindMemoryTypeIndexForImageInfo(@NativeType("VmaAllocator") long allocator, @NativeType("VkImageCreateInfo const *") VkImageCreateInfo pImageCreateInfo, @NativeType("VmaAllocationCreateInfo const *") VmaAllocationCreateInfo pAllocationCreateInfo, @NativeType("uint32_t *") IntBuffer pMemoryTypeIndex) { if (CHECKS) { check(allocator); check(pMemoryTypeIndex, 1); VkImageCreateInfo.validate(pImageCreateInfo.address()); VmaAllocationCreateInfo.validate(pAllocationCreateInfo.address()); } return nvmaFindMemoryTypeIndexForImageInfo(allocator, pImageCreateInfo.address(), pAllocationCreateInfo.address(), memAddress(pMemoryTypeIndex)); } // --- [ vmaCreatePool ] --- /** Unsafe version of: {@link #vmaCreatePool CreatePool} */ public static native int nvmaCreatePool(long allocator, long pCreateInfo, long pPool); /** * Allocates Vulkan device memory and creates {@code VmaPool} object. * * @param allocator Allocator object * @param pCreateInfo parameters of pool to create * @param pPool handle to created pool */ @NativeType("VkResult") public static int vmaCreatePool(@NativeType("VmaAllocator") long allocator, @NativeType("VmaPoolCreateInfo const *") VmaPoolCreateInfo pCreateInfo, @NativeType("VmaPool *") PointerBuffer pPool) { if (CHECKS) { check(allocator); check(pPool, 1); } return nvmaCreatePool(allocator, pCreateInfo.address(), memAddress(pPool)); } // --- [ vmaDestroyPool ] --- /** Unsafe version of: {@link #vmaDestroyPool DestroyPool} */ public static native void nvmaDestroyPool(long allocator, long pool); /** Destroys {@code VmaPool} object and frees Vulkan device memory. */ public static void vmaDestroyPool(@NativeType("VmaAllocator") long allocator, @NativeType("VmaPool") long pool) { if (CHECKS) { check(allocator); check(pool); } nvmaDestroyPool(allocator, pool); } // --- [ vmaGetPoolStats ] --- /** Unsafe version of: {@link #vmaGetPoolStats GetPoolStats} */ public static native void nvmaGetPoolStats(long allocator, long pool, long pPoolStats); /** * Retrieves statistics of existing VmaPool object. * * @param allocator Allocator object * @param pool pool object * @param pPoolStats statistics of specified pool */ public static void vmaGetPoolStats(@NativeType("VmaAllocator") long allocator, @NativeType("VmaPool") long pool, @NativeType("VmaPoolStats *") VmaPoolStats pPoolStats) { if (CHECKS) { check(allocator); check(pool); } nvmaGetPoolStats(allocator, pool, pPoolStats.address()); } // --- [ vmaMakePoolAllocationsLost ] --- /** Unsafe version of: {@link #vmaMakePoolAllocationsLost MakePoolAllocationsLost} */ public static native void nvmaMakePoolAllocationsLost(long allocator, long pool, long pLostAllocationCount); /** * Marks all allocations in given pool as lost if they are not used in current frame or {@link VmaPoolCreateInfo}{@code ::frameInUseCount} back from now. * * @param allocator Allocator object * @param pool pool * @param pLostAllocationCount number of allocations marked as lost. Optional - pass null if you don't need this information. 
*/ public static void vmaMakePoolAllocationsLost(@NativeType("VmaAllocator") long allocator, @NativeType("VmaPool") long pool, @Nullable @NativeType("size_t *") PointerBuffer pLostAllocationCount) { if (CHECKS) { check(allocator); check(pool); checkSafe(pLostAllocationCount, 1); } nvmaMakePoolAllocationsLost(allocator, pool, memAddressSafe(pLostAllocationCount)); } /** * Marks all allocations in given pool as lost if they are not used in current frame or {@link VmaPoolCreateInfo}{@code ::frameInUseCount} back from now. * * @param allocator Allocator object * @param pool pool */ @NativeType("void") public static long vmaMakePoolAllocationsLost(@NativeType("VmaAllocator") long allocator, @NativeType("VmaPool") long pool) { if (CHECKS) { check(allocator); check(pool); } MemoryStack stack = stackGet(); int stackPointer = stack.getPointer(); try { PointerBuffer pLostAllocationCount = stack.callocPointer(1); nvmaMakePoolAllocationsLost(allocator, pool, memAddress(pLostAllocationCount)); return pLostAllocationCount.get(0); } finally { stack.setPointer(stackPointer); } } // --- [ vmaAllocateMemory ] --- /** Unsafe version of: {@link #vmaAllocateMemory AllocateMemory} */ public static native int nvmaAllocateMemory(long allocator, long pVkMemoryRequirements, long pCreateInfo, long pAllocation, long pAllocationInfo); /** * General purpose memory allocation. * *

You should free the memory using {@link #vmaFreeMemory FreeMemory}.

* *

It is recommended to use {@link #vmaAllocateMemoryForBuffer AllocateMemoryForBuffer}, {@link #vmaAllocateMemoryForImage AllocateMemoryForImage}, {@link #vmaCreateBuffer CreateBuffer}, {@link #vmaCreateImage CreateImage} instead whenever possible.

* * @param pAllocation handle to allocated memory * @param pAllocationInfo information about allocated memory. Optional. It can be later fetched using function {@link #vmaGetAllocationInfo GetAllocationInfo}. */ @NativeType("VkResult") public static int vmaAllocateMemory(@NativeType("VmaAllocator") long allocator, @NativeType("VkMemoryRequirements const *") VkMemoryRequirements pVkMemoryRequirements, @NativeType("VmaAllocationCreateInfo const *") VmaAllocationCreateInfo pCreateInfo, @NativeType("VmaAllocation *") PointerBuffer pAllocation, @Nullable @NativeType("VmaAllocationInfo *") VmaAllocationInfo pAllocationInfo) { if (CHECKS) { check(allocator); check(pAllocation, 1); VmaAllocationCreateInfo.validate(pCreateInfo.address()); } return nvmaAllocateMemory(allocator, pVkMemoryRequirements.address(), pCreateInfo.address(), memAddress(pAllocation), memAddressSafe(pAllocationInfo)); } // --- [ vmaAllocateMemoryForBuffer ] --- /** Unsafe version of: {@link #vmaAllocateMemoryForBuffer AllocateMemoryForBuffer} */ public static native int nvmaAllocateMemoryForBuffer(long allocator, long buffer, long pCreateInfo, long pAllocation, long pAllocationInfo); /** * Buffer memory allocation. * *

You should free the memory using {@link #vmaFreeMemory FreeMemory}.
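A sketch of the separate allocate-and-bind path for an already created {@code VkBuffer} handle (in most cases {@link #vmaCreateBuffer CreateBuffer} is the simpler choice); {@code buffer} is assumed to come from {@code vkCreateBuffer()}.

     * try (MemoryStack stack = stackPush()) {
     *     VmaAllocationCreateInfo allocCreateInfo = VmaAllocationCreateInfo.calloc()
     *         .usage(VMA_MEMORY_USAGE_GPU_ONLY);
     *
     *     PointerBuffer pAllocation = stack.mallocPointer(1);
     *     int result = vmaAllocateMemoryForBuffer(allocator, buffer, allocCreateInfo, pAllocation, null);
     *     allocCreateInfo.free();
     *
     *     if (result == VK_SUCCESS) {
     *         long allocation = pAllocation.get(0);
     *         // bind through VMA so access to the shared VkDeviceMemory stays synchronized
     *         vmaBindBufferMemory(allocator, allocation, buffer);
     *     }
     * }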

* * @param pAllocation handle to allocated memory * @param pAllocationInfo information about allocated memory. Optional. It can be later fetched using function {@link #vmaGetAllocationInfo GetAllocationInfo}. */ @NativeType("VkResult") public static int vmaAllocateMemoryForBuffer(@NativeType("VmaAllocator") long allocator, @NativeType("VkBuffer") long buffer, @NativeType("VmaAllocationCreateInfo const *") VmaAllocationCreateInfo pCreateInfo, @NativeType("VmaAllocation *") PointerBuffer pAllocation, @Nullable @NativeType("VmaAllocationInfo *") VmaAllocationInfo pAllocationInfo) { if (CHECKS) { check(allocator); check(pAllocation, 1); VmaAllocationCreateInfo.validate(pCreateInfo.address()); } return nvmaAllocateMemoryForBuffer(allocator, buffer, pCreateInfo.address(), memAddress(pAllocation), memAddressSafe(pAllocationInfo)); } // --- [ vmaAllocateMemoryForImage ] --- /** Unsafe version of: {@link #vmaAllocateMemoryForImage AllocateMemoryForImage} */ public static native int nvmaAllocateMemoryForImage(long allocator, long image, long pCreateInfo, long pAllocation, long pAllocationInfo); /** * Function similar to {@link #vmaAllocateMemoryForBuffer AllocateMemoryForBuffer}. * * @param pAllocation handle to allocated memory * @param pAllocationInfo information about allocated memory. Optional. It can be later fetched using function {@link #vmaGetAllocationInfo GetAllocationInfo}. */ @NativeType("VkResult") public static int vmaAllocateMemoryForImage(@NativeType("VmaAllocator") long allocator, @NativeType("VkImage") long image, @NativeType("VmaAllocationCreateInfo const *") VmaAllocationCreateInfo pCreateInfo, @NativeType("VmaAllocation *") PointerBuffer pAllocation, @Nullable @NativeType("VmaAllocationInfo *") VmaAllocationInfo pAllocationInfo) { if (CHECKS) { check(allocator); check(pAllocation, 1); VmaAllocationCreateInfo.validate(pCreateInfo.address()); } return nvmaAllocateMemoryForImage(allocator, image, pCreateInfo.address(), memAddress(pAllocation), memAddressSafe(pAllocationInfo)); } // --- [ vmaFreeMemory ] --- /** Unsafe version of: {@link #vmaFreeMemory FreeMemory} */ public static native void nvmaFreeMemory(long allocator, long allocation); /** Frees memory previously allocated using {@link #vmaAllocateMemory AllocateMemory}, {@link #vmaAllocateMemoryForBuffer AllocateMemoryForBuffer}, or {@link #vmaAllocateMemoryForImage AllocateMemoryForImage}. */ public static void vmaFreeMemory(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation") long allocation) { if (CHECKS) { check(allocator); check(allocation); } nvmaFreeMemory(allocator, allocation); } // --- [ vmaGetAllocationInfo ] --- /** Unsafe version of: {@link #vmaGetAllocationInfo GetAllocationInfo} */ public static native void nvmaGetAllocationInfo(long allocator, long allocation, long pAllocationInfo); /** * Returns current information about specified allocation and atomically marks it as used in current frame. * *

Current parameters of the given allocation are returned in {@code pAllocationInfo}.

* *

This function also atomically "touches" the allocation - marks it as used in the current frame, just like {@link #vmaTouchAllocation TouchAllocation}. If the allocation is in lost state, {@code pAllocationInfo->deviceMemory == VK_NULL_HANDLE}.

* *

This function uses atomics and doesn't lock any mutex, so it should be quite efficient; still, avoid calling it more often than necessary.

* *
  • You can retrieve the same {@link VmaAllocationInfo} structure while creating your resource, from {@link #vmaCreateBuffer CreateBuffer} or {@link #vmaCreateImage CreateImage}. You can remember it if you are sure its parameters don't change (e.g. due to defragmentation or the allocation becoming lost).
  • If you just want to check whether the allocation is not lost, {@link #vmaTouchAllocation TouchAllocation} will work faster.
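A sketch of querying the current parameters of an allocation, e.g. after {@link #vmaDefragment Defragment} has moved it; {@code allocation} is a placeholder handle.

     * VmaAllocationInfo allocationInfo = VmaAllocationInfo.calloc();
     * vmaGetAllocationInfo(allocator, allocation, allocationInfo);
     *
     * long deviceMemory = allocationInfo.deviceMemory(); // VK_NULL_HANDLE if the allocation is lost
     * long offset       = allocationInfo.offset();
     * long size         = allocationInfo.size();
     *
     * allocationInfo.free();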
*/ public static void vmaGetAllocationInfo(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation") long allocation, @NativeType("VmaAllocationInfo *") VmaAllocationInfo pAllocationInfo) { if (CHECKS) { check(allocator); check(allocation); } nvmaGetAllocationInfo(allocator, allocation, pAllocationInfo.address()); } // --- [ vmaTouchAllocation ] --- /** Unsafe version of: {@link #vmaTouchAllocation TouchAllocation} */ public static native int nvmaTouchAllocation(long allocator, long allocation); /** * Returns {@code VK_TRUE} if allocation is not lost and atomically marks it as used in current frame. * *

If the allocation has been created with {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT} flag, this function returns {@code VK_TRUE} if it's not in lost state, so it can still be used. It then also atomically "touches" the allocation - marks it as used in the current frame, so that you can be sure it won't become lost in the current frame or the next {@code frameInUseCount} frames.

* *

If the allocation is in lost state, the function returns {@code VK_FALSE}. Memory of such an allocation, as well as the buffer or image bound to it, should not be used. The lost allocation and the buffer/image still need to be destroyed.

* *

If the allocation has been created without {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT} flag, this function always returns {@code VK_TRUE}.
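A sketch of the per-frame pattern for allocations created with {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT}; the recreation step is application-specific.

     * if (!vmaTouchAllocation(allocator, allocation)) {
     *     // the allocation is lost: destroy the old buffer/allocation pair and recreate it
     *     vmaDestroyBuffer(allocator, buffer, allocation);
     *     // ... recreate with vmaCreateBuffer() and re-upload the contents ...
     * }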

*/ @NativeType("VkBool32") public static boolean vmaTouchAllocation(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation") long allocation) { if (CHECKS) { check(allocator); check(allocation); } return nvmaTouchAllocation(allocator, allocation) != 0; } // --- [ vmaSetAllocationUserData ] --- /** Unsafe version of: {@link #vmaSetAllocationUserData SetAllocationUserData} */ public static native void nvmaSetAllocationUserData(long allocator, long allocation, long pUserData); /** * Sets {@code pUserData} in given allocation to new value. * *

If the allocation was created with {@link #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT}, {@code pUserData} must be either null or a pointer to a null-terminated string. The function makes a local copy of the string and sets it as the allocation's {@code pUserData}. The string passed as {@code pUserData} doesn't need to stay valid for the whole lifetime of the allocation - you can free it after this call. The string previously pointed to by the allocation's {@code pUserData} is freed from memory.

* *

If the flag was not used, the value of the pointer {@code pUserData} is just copied to the allocation's {@code pUserData}. It is opaque, so you can use it however you want - e.g. as a pointer, an ordinal number or some handle to your own data.
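A sketch of attaching a debug name, assuming the allocation was created with {@link #VMA_ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT ALLOCATION_CREATE_USER_DATA_COPY_STRING_BIT} so that the string is copied internally.

     * ByteBuffer name = memASCII("terrain vertex buffer"); // null-terminated
     * vmaSetAllocationUserData(allocator, allocation, memAddress(name));
     * memFree(name); // safe: the allocator keeps its own copy when the COPY_STRING flag was used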

*/ public static void vmaSetAllocationUserData(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation") long allocation, @NativeType("void *") long pUserData) { if (CHECKS) { check(allocator); check(allocation); check(pUserData); } nvmaSetAllocationUserData(allocator, allocation, pUserData); } // --- [ vmaCreateLostAllocation ] --- /** Unsafe version of: {@link #vmaCreateLostAllocation CreateLostAllocation} */ public static native void nvmaCreateLostAllocation(long allocator, long pAllocation); /** * Creates new allocation that is in lost state from the beginning. * *

It can be useful if you need a dummy, non-null allocation.

* *

You still need to destroy the created object using {@link #vmaFreeMemory FreeMemory}.

* *

The returned allocation is not tied to any specific memory pool or memory type and is not bound to any image or buffer. It has size = 0. It cannot be turned into a real, non-empty allocation.

*/ public static void vmaCreateLostAllocation(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation *") PointerBuffer pAllocation) { if (CHECKS) { check(allocator); check(pAllocation, 1); } nvmaCreateLostAllocation(allocator, memAddress(pAllocation)); } // --- [ vmaMapMemory ] --- /** Unsafe version of: {@link #vmaMapMemory MapMemory} */ public static native int nvmaMapMemory(long allocator, long allocation, long ppData); /** * Maps memory represented by given allocation and returns pointer to it. * *

Maps memory represented by the given allocation to make it accessible to CPU code. On success, {@code *ppData} contains a pointer to the first byte of this memory. If the allocation is part of a bigger {@code VkDeviceMemory} block, the pointer is correctly offset to the beginning of the region assigned to this particular allocation.

* *

Mapping is internally reference-counted and synchronized, so even though the raw Vulkan function {@code vkMapMemory()} cannot be used to map the same block of {@code VkDeviceMemory} multiple times simultaneously, it is safe to call this function on allocations assigned to the same memory block. The actual Vulkan memory will be mapped on the first mapping and unmapped on the last unmapping.

* *

If the function succeeded, you must call {@link #vmaUnmapMemory UnmapMemory} to unmap the allocation when mapping is no longer needed or before freeing the allocation, at the latest.

* *

It is also safe to call this function multiple times on the same allocation. You must call {@code vmaUnmapMemory()} the same number of times as you called {@code vmaMapMemory()}.

* *

It is also safe to call this function on an allocation created with the {@link #VMA_ALLOCATION_CREATE_MAPPED_BIT ALLOCATION_CREATE_MAPPED_BIT} flag. Its memory stays mapped all the time. You must still call {@code vmaUnmapMemory()} the same number of times as you called {@code vmaMapMemory()}. You must not call {@code vmaUnmapMemory()} an additional time to free the "0-th" mapping made automatically due to the {@link #VMA_ALLOCATION_CREATE_MAPPED_BIT ALLOCATION_CREATE_MAPPED_BIT} flag.

* *

This function fails when used on an allocation made in a memory type that is not {@code HOST_VISIBLE}.

* *

This function always fails when called for an allocation that was created with the {@link #VMA_ALLOCATION_CREATE_CAN_BECOME_LOST_BIT ALLOCATION_CREATE_CAN_BECOME_LOST_BIT} flag. Such allocations cannot be mapped.
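A sketch of uploading CPU-side data through a temporary mapping; {@code vertexData} is a placeholder {@code ByteBuffer}, and non-{@code HOST_COHERENT} memory would additionally require a flush.

     * try (MemoryStack stack = stackPush()) {
     *     PointerBuffer ppData = stack.mallocPointer(1);
     *     if (vmaMapMemory(allocator, allocation, ppData) == VK_SUCCESS) {
     *         // copy the CPU-side data into the mapped region
     *         memCopy(memAddress(vertexData), ppData.get(0), vertexData.remaining());
     *         vmaUnmapMemory(allocator, allocation);
     *     }
     * }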

*/ @NativeType("VkResult") public static int vmaMapMemory(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation") long allocation, @NativeType("void **") PointerBuffer ppData) { if (CHECKS) { check(allocator); check(allocation); check(ppData, 1); } return nvmaMapMemory(allocator, allocation, memAddress(ppData)); } // --- [ vmaUnmapMemory ] --- /** Unsafe version of: {@link #vmaUnmapMemory UnmapMemory} */ public static native void nvmaUnmapMemory(long allocator, long allocation); /** * Unmaps memory represented by given allocation, mapped previously using {@link #vmaMapMemory MapMemory}. * *

For details, see description of {@code vmaMapMemory()}.

*/ public static void vmaUnmapMemory(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation") long allocation) { if (CHECKS) { check(allocator); check(allocation); } nvmaUnmapMemory(allocator, allocation); } // --- [ vmaDefragment ] --- /** * Unsafe version of: {@link #vmaDefragment Defragment} * * @param allocationCount number of elements in {@code pAllocations} and {@code pAllocationsChanged} arrays */ public static native int nvmaDefragment(long allocator, long pAllocations, long allocationCount, long pAllocationsChanged, long pDefragmentationInfo, long pDefragmentationStats); /** * Compacts memory by moving allocations. * *

This function works by moving allocations to different places (different {@code VkDeviceMemory} objects and/or different offsets) in order to optimize memory usage. Only allocations that are in the {@code pAllocations} array can be moved. All other allocations are considered nonmovable in this call. Basic rules:

* *
  • Only allocations made in memory types that have the {@code VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT} flag can be compacted. You may pass other allocations but it makes no sense - these will never be moved.
  • You may pass allocations made with {@link #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT ALLOCATION_CREATE_DEDICATED_MEMORY_BIT} but it makes no sense - they will never be moved.
  • Allocations made with or without the {@link #VMA_ALLOCATION_CREATE_MAPPED_BIT ALLOCATION_CREATE_MAPPED_BIT} flag can both be compacted. If not persistently mapped, memory will be mapped temporarily inside this function if needed.
  • You must not pass the same {@code VmaAllocation} object multiple times in the {@code pAllocations} array.
* *

The function also frees empty {@code VkDeviceMemory} blocks.

* *

After an allocation has been moved, its {@link VmaAllocationInfo}{@code ::deviceMemory} and/or {@link VmaAllocationInfo}{@code ::offset} change. You must query them again using {@link #vmaGetAllocationInfo GetAllocationInfo} if you need them.

* *

If an allocation has been moved, data in memory is copied to the new place automatically, but if it was bound to a buffer or an image, you must destroy that object yourself, create a new one and bind it to the new memory pointed to by the allocation. You must use {@code vkDestroyBuffer()}, {@code vkDestroyImage()}, {@code vkCreateBuffer()}, {@code vkCreateImage()} for that purpose and NOT {@link #vmaDestroyBuffer DestroyBuffer}, {@link #vmaDestroyImage DestroyImage}, {@link #vmaCreateBuffer CreateBuffer}, {@link #vmaCreateImage CreateImage}! Example:

* *

     * VkDevice device = ...;
     * VmaAllocator allocator = ...;
     * std::vector<VkBuffer> buffers = ...;
     * std::vector<VmaAllocation> allocations = ...;
     * 
     * std::vector<VkBool32> allocationsChanged(allocations.size());
     * vmaDefragment(allocator, allocations.data(), allocations.size(), allocationsChanged.data(), nullptr, nullptr);
     * 
     * for(size_t i = 0; i < allocations.size(); ++i)
     * {
     *     if(allocationsChanged[i])
     *     {
     *         VmaAllocationInfo allocInfo;
     *         vmaGetAllocationInfo(allocator, allocations[i], &allocInfo);
     * 
     *         vkDestroyBuffer(device, buffers[i], nullptr);
     * 
     *         VkBufferCreateInfo bufferInfo = ...;
     *         vkCreateBuffer(device, &bufferInfo, nullptr, &buffers[i]);
     * 
     *         // You can make dummy call to vkGetBufferMemoryRequirements here to silence validation layer warning.
     * 
     *         vkBindBufferMemory(device, buffers[i], allocInfo.deviceMemory, allocInfo.offset);
     *     }
     * }
* *

Note: Please don't expect memory to be fully compacted after this call. The algorithms inside are based on heuristics that try to maximize the number of Vulkan memory blocks that become totally empty, so they can be released, as well as to maximize contiguous empty space inside the remaining blocks, while minimizing the number and size of data that needs to be moved. Some fragmentation still remains after this call. This is normal.

* *

Warning: This function is not 100% correct according to the Vulkan specification. Use it at your own risk. That's because Vulkan doesn't guarantee that memory requirements (size and alignment) for a new buffer or image are consistent. They may be different even for subsequent calls with the same parameters. It really does happen on some platforms, especially with images.

* *

Warning: This function may be time-consuming, so you shouldn't call it too often (like every frame or after every resource creation/destruction). You can call it on special occasions (like when reloading a game level or when you just destroyed a lot of objects).

* * @param pAllocations array of allocations that can be moved during this compaction * @param pAllocationsChanged array of boolean values that will indicate whether matching allocation in {@code pAllocations} array has been moved. This parameter is optional. * Pass null if you don't need this information. * @param pDefragmentationInfo configuration parameters. Optional - pass null to use default values. * @param pDefragmentationStats statistics returned by the function. Optional - pass null if you don't need this information. * * @return {@code VK_SUCCESS} if completed, {@code VK_INCOMPLETE} if succeeded but didn't make all possible optimizations because limits specified in * {@code pDefragmentationInfo} have been reached, negative error code in case of error. */ @NativeType("VkResult") public static int vmaDefragment(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation *") PointerBuffer pAllocations, @Nullable @NativeType("VkBool32 *") IntBuffer pAllocationsChanged, @Nullable @NativeType("VmaDefragmentationInfo const *") VmaDefragmentationInfo pDefragmentationInfo, @Nullable @NativeType("VmaDefragmentationStats *") VmaDefragmentationStats pDefragmentationStats) { if (CHECKS) { check(allocator); checkSafe(pAllocationsChanged, pAllocations.remaining()); } return nvmaDefragment(allocator, memAddress(pAllocations), pAllocations.remaining(), memAddressSafe(pAllocationsChanged), memAddressSafe(pDefragmentationInfo), memAddressSafe(pDefragmentationStats)); } // --- [ vmaBindBufferMemory ] --- /** Unsafe version of: {@link #vmaBindBufferMemory BindBufferMemory} */ public static native int nvmaBindBufferMemory(long allocator, long allocation, long buffer); /** * Binds buffer to allocation. * *

Binds the specified buffer to the region of memory represented by the specified allocation. Gets the {@code VkDeviceMemory} handle and offset from the allocation. If you want to create a buffer, allocate memory for it and bind them together separately, you should use this function for binding instead of the standard {@code vkBindBufferMemory()}, because it ensures proper synchronization so that when a {@code VkDeviceMemory} object is used by multiple allocations, calls to {@code vkBind*Memory()} or {@code vkMapMemory()} won't happen from multiple threads simultaneously (which is illegal in Vulkan).

* *

It is recommended to use function {@link #vmaCreateBuffer CreateBuffer} instead of this one.

*/ @NativeType("VkResult") public static int vmaBindBufferMemory(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation") long allocation, @NativeType("VkBuffer") long buffer) { if (CHECKS) { check(allocator); check(allocation); } return nvmaBindBufferMemory(allocator, allocation, buffer); } // --- [ vmaBindImageMemory ] --- /** Unsafe version of: {@link #vmaBindImageMemory BindImageMemory} */ public static native int nvmaBindImageMemory(long allocator, long allocation, long image); /** * Binds image to allocation. * *

Binds the specified image to the region of memory represented by the specified allocation. Gets the {@code VkDeviceMemory} handle and offset from the allocation. If you want to create an image, allocate memory for it and bind them together separately, you should use this function for binding instead of the standard {@code vkBindImageMemory()}, because it ensures proper synchronization so that when a {@code VkDeviceMemory} object is used by multiple allocations, calls to {@code vkBind*Memory()} or {@code vkMapMemory()} won't happen from multiple threads simultaneously (which is illegal in Vulkan).

* *

It is recommended to use function {@link #vmaCreateImage CreateImage} instead of this one.

*/ @NativeType("VkResult") public static int vmaBindImageMemory(@NativeType("VmaAllocator") long allocator, @NativeType("VmaAllocation") long allocation, @NativeType("VkImage") long image) { if (CHECKS) { check(allocator); check(allocation); } return nvmaBindImageMemory(allocator, allocation, image); } // --- [ vmaCreateBuffer ] --- /** Unsafe version of: {@link #vmaCreateBuffer CreateBuffer} */ public static native int nvmaCreateBuffer(long allocator, long pBufferCreateInfo, long pAllocationCreateInfo, long pBuffer, long pAllocation, long pAllocationInfo); /** * This function automatically: * *
  • Creates buffer.
  • Allocates appropriate memory for it.
  • Binds the buffer with the memory.
* *

If any of these operations fail, the buffer and allocation are not created, the returned value is a negative error code, and {@code *pBuffer} and {@code *pAllocation} are null.

* *

If the function succeeded, you must destroy both the buffer and the allocation when you no longer need them, using either the convenience function {@link #vmaDestroyBuffer DestroyBuffer} or separately, using {@code vkDestroyBuffer()} and {@link #vmaFreeMemory FreeMemory}.

* *

If the {@link #VMA_ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT ALLOCATOR_CREATE_KHR_DEDICATED_ALLOCATION_BIT} flag was used, the {@code VK_KHR_dedicated_allocation} extension is used internally to query the driver whether it requires or prefers the new buffer to have a dedicated allocation. If yes, and if dedicated allocation is possible ({@link VmaAllocationCreateInfo}{@code ::pool} is null and {@link #VMA_ALLOCATION_CREATE_NEVER_ALLOCATE_BIT ALLOCATION_CREATE_NEVER_ALLOCATE_BIT} is not used), it creates a dedicated allocation for this buffer, just like when using {@link #VMA_ALLOCATION_CREATE_DEDICATED_MEMORY_BIT ALLOCATION_CREATE_DEDICATED_MEMORY_BIT}.
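A sketch of the corresponding LWJGL-side call; the buffer size, usage flags and memory usage value are illustrative.

     * try (MemoryStack stack = stackPush()) {
     *     VkBufferCreateInfo bufferCreateInfo = VkBufferCreateInfo.calloc()
     *         .sType(VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO)
     *         .size(1024 * 1024)
     *         .usage(VK_BUFFER_USAGE_INDEX_BUFFER_BIT | VK_BUFFER_USAGE_TRANSFER_DST_BIT);
     *
     *     VmaAllocationCreateInfo allocCreateInfo = VmaAllocationCreateInfo.calloc()
     *         .usage(VMA_MEMORY_USAGE_GPU_ONLY);
     *
     *     LongBuffer    pBuffer     = stack.mallocLong(1);
     *     PointerBuffer pAllocation = stack.mallocPointer(1);
     *     int result = vmaCreateBuffer(allocator, bufferCreateInfo, allocCreateInfo, pBuffer, pAllocation, null);
     *
     *     bufferCreateInfo.free();
     *     allocCreateInfo.free();
     *
     *     if (result == VK_SUCCESS) {
     *         long buffer     = pBuffer.get(0);
     *         long allocation = pAllocation.get(0);
     *         // ... use the buffer, later: vmaDestroyBuffer(allocator, buffer, allocation);
     *     }
     * }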

* * @param pBuffer buffer that was created * @param pAllocation allocation that was created * @param pAllocationInfo information about allocated memory. Optional. It can be later fetched using function {@link #vmaGetAllocationInfo GetAllocationInfo}. */ @NativeType("VkResult") public static int vmaCreateBuffer(@NativeType("VmaAllocator") long allocator, @NativeType("VkBufferCreateInfo const *") VkBufferCreateInfo pBufferCreateInfo, @NativeType("VmaAllocationCreateInfo const *") VmaAllocationCreateInfo pAllocationCreateInfo, @NativeType("VkBuffer *") LongBuffer pBuffer, @NativeType("VmaAllocation *") PointerBuffer pAllocation, @Nullable @NativeType("VmaAllocationInfo *") VmaAllocationInfo pAllocationInfo) { if (CHECKS) { check(allocator); check(pBuffer, 1); check(pAllocation, 1); VkBufferCreateInfo.validate(pBufferCreateInfo.address()); VmaAllocationCreateInfo.validate(pAllocationCreateInfo.address()); } return nvmaCreateBuffer(allocator, pBufferCreateInfo.address(), pAllocationCreateInfo.address(), memAddress(pBuffer), memAddress(pAllocation), memAddressSafe(pAllocationInfo)); } // --- [ vmaDestroyBuffer ] --- /** Unsafe version of: {@link #vmaDestroyBuffer DestroyBuffer} */ public static native void nvmaDestroyBuffer(long allocator, long buffer, long allocation); /** * Destroys Vulkan buffer and frees allocated memory. * *

This is just a convenience function equivalent to:

* *

     * vkDestroyBuffer(device, buffer, allocationCallbacks);
     * vmaFreeMemory(allocator, allocation);
* *

It is safe to pass null as buffer and/or allocation.

*/ public static void vmaDestroyBuffer(@NativeType("VmaAllocator") long allocator, @NativeType("VkBuffer") long buffer, @NativeType("VmaAllocation") long allocation) { if (CHECKS) { check(allocator); check(allocation); } nvmaDestroyBuffer(allocator, buffer, allocation); } // --- [ vmaCreateImage ] --- /** Unsafe version of: {@link #vmaCreateImage CreateImage} */ public static native int nvmaCreateImage(long allocator, long pImageCreateInfo, long pAllocationCreateInfo, long pImage, long pAllocation, long pAllocationInfo); /** * Function similar to {@link #vmaCreateBuffer CreateBuffer}. * * @param pImage image that was created * @param pAllocation allocation that was created * @param pAllocationInfo information about allocated memory. Optional. It can be later fetched using function {@link #vmaGetAllocationInfo GetAllocationInfo}. */ @NativeType("VkResult") public static int vmaCreateImage(@NativeType("VmaAllocator") long allocator, @NativeType("VkImageCreateInfo const *") VkImageCreateInfo pImageCreateInfo, @NativeType("VmaAllocationCreateInfo const *") VmaAllocationCreateInfo pAllocationCreateInfo, @NativeType("VkImage *") LongBuffer pImage, @NativeType("VmaAllocation *") PointerBuffer pAllocation, @Nullable @NativeType("VmaAllocationInfo *") VmaAllocationInfo pAllocationInfo) { if (CHECKS) { check(allocator); check(pImage, 1); check(pAllocation, 1); VkImageCreateInfo.validate(pImageCreateInfo.address()); VmaAllocationCreateInfo.validate(pAllocationCreateInfo.address()); } return nvmaCreateImage(allocator, pImageCreateInfo.address(), pAllocationCreateInfo.address(), memAddress(pImage), memAddress(pAllocation), memAddressSafe(pAllocationInfo)); } // --- [ vmaDestroyImage ] --- /** Unsafe version of: {@link #vmaDestroyImage DestroyImage} */ public static native void nvmaDestroyImage(long allocator, long image, long allocation); /** * Destroys Vulkan image and frees allocated memory. * *

This is just a convenience function equivalent to:

* *

     * vkDestroyImage(device, image, allocationCallbacks);
     * vmaFreeMemory(allocator, allocation);
* *

It is safe to pass null as image and/or allocation.

*/ public static void vmaDestroyImage(@NativeType("VmaAllocator") long allocator, @NativeType("VkImage") long image, @NativeType("VmaAllocation") long allocation) { if (CHECKS) { check(allocator); check(allocation); } nvmaDestroyImage(allocator, image, allocation); } }



