I think there are many issues with binary compatibility beyond function inlining. An ODP application cannot expect all ODP implementations to support the same number of ODP queues or classification rules, or even the same set of classification terms (fields), let alone which of them are supported efficiently/in HW. Is there some kind of lowest common denominator an application should be able to expect? Do we want to make the functional guarantees of an ODP implementation stricter? What would the consequences of such strict guarantees be?
I think an application that requires binary compatibility across ARMv8.1 platforms should compile and link against a specific ODP SW implementation (possibly with some well-defined HW offloads where the underlying platform can provide the relevant drivers), i.e. more of a (user-space) Linux architecture than standard ODP (which is influenced by OpenGL). The important binary interfaces then become the interfaces to these offloads/drivers.
On 16 November 2015 at 14:23, Nicolas Morey-Chaisemartin nmorey@kalray.eu wrote:
On 11/11/2015 09:45 AM, Savolainen, Petri (Nokia - FI/Espoo) wrote:
-----Original Message-----
From: lng-odp [mailto:lng-odp-bounces@lists.linaro.org] On Behalf Of EXT Nicolas Morey-Chaisemartin
Sent: Tuesday, November 10, 2015 5:13 PM
To: Zoltan Kiss; linaro-toolchain@lists.linaro.org
Cc: lng-odp
Subject: Re: [lng-odp] Runtime inlining
As I said in the call last week, the problem is wider than that.
ODP specifies a lot of types but not their sizes, and a lot of enums/defines (things like ODP_PKTIO_INVALID) but not their values either. For our port a lot of those values were changed for performance/implementation reasons, so I'm not even compatible between one version of our ODP port and the next.
The only way I can see to solve this is for ODP to fix the size of all these types. Default/invalid values are not as easy, as a pointer behaves completely differently from a struct or bitfield.
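To illustrate the point with a made-up sketch (invented names, not any real implementation's definitions): two conforming implementations could pick handle representations whose invalid values cannot even be tested the same way at the binary level:

#include <stdint.h>

/* Hypothetical implementation A: handle is a pointer to an internal entry. */
typedef struct pktio_entry *pktio_handle_a_t;
#define PKTIO_INVALID_A ((pktio_handle_a_t)NULL)

/* Hypothetical implementation B: handle packs an index and a generation count. */
typedef struct {
	uint32_t index;
	uint32_t gen;
} pktio_handle_b_t;
#define PKTIO_INVALID_B ((pktio_handle_b_t){ .index = UINT32_MAX, .gen = 0 })

/* A binary built for A checks "h == PKTIO_INVALID_A"; the same check does
 * not even compile for B (struct comparison), let alone match its layout. */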
Nicolas
Type sizes do not need to be fixed in general, but only when an application is built for binary compatibility (the use case we are talking about here). Binary compatibility, and thus the fixed type sizes, are defined per ISA.
We can e.g. define a configure target (for our reference implementation, linux-generic) "--binary-compatible=armv8.x" or "--binary-compatible=x86_64". When you build your application with that option, the "platform dependent" types and constants are fixed to pre-defined values specified in (new) ODP API arch files.
So instead of building against odp/platform/linux-generic/include/odp/plat/queue_types.h ...
typedef ODP_HANDLE_T(odp_queue_t);
#define ODP_QUEUE_INVALID  _odp_cast_scalar(odp_queue_t, 0)
#define ODP_QUEUE_NAME_LEN 32
... you'd build against odp/arch/armv8.x/include/odp/queue_types.h ...
typedef uintptr_t odp_queue_t;
#define ODP_QUEUE_INVALID  ((uintptr_t)0)
#define ODP_QUEUE_NAME_LEN 64
... or odp/arch/x86_64/include/odp/queue_types.h
typedef uint64_t odp_queue_t;
#define ODP_QUEUE_INVALID  ((uint64_t)0xffffffffffffffff)
#define ODP_QUEUE_NAME_LEN 32
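For illustration (my sketch, not part of the proposal): application code that only uses the odp_queue_t typedef and the ODP_QUEUE_INVALID constant compiles unchanged against either arch header, and fixing the underlying type per ISA is what additionally makes the resulting binary portable across implementations for that ISA. queue_lookup_by_name() is a hypothetical application helper, not an ODP API:

#include <odp.h> /* resolves to the selected arch or platform headers */

odp_queue_t queue_lookup_by_name(const char *name); /* hypothetical app helper */

int app_use_queue(const char *name)
{
	odp_queue_t q = queue_lookup_by_name(name);

	if (q == ODP_QUEUE_INVALID)
		return -1; /* not found */

	/* ... enqueue/dequeue through the ODP API ... */
	return 0;
}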
For highest performance on a fixed target platform, you'd still build against the platform directly
odp/platform/<soc_vendor_xyz>/include/odp/plat/queue_types.h
typedef xyz_queue_desc_t * odp_queue_t;
#define ODP_QUEUE_INVALID  ((xyz_queue_desc_t *)0xdeadbeef)
#define ODP_QUEUE_NAME_LEN 20
-Petri
It still means that you need to enforce a type for all ODP implementations on a given arch, which could be problematic. As a concrete example: the way handles are used now for odp_packet_t brings some useful features for checks and memory savings, but performance-wise they are a "disaster". One of the first things I did was to switch them to pointers, and if I wanted a high-perf Linux x86_64 implementation, I'd probably do the same.
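Roughly, the trade-off looks like this (an illustrative sketch with invented names, not Kalray's actual code): a checked index handle buys validation and compactness at the cost of a translation on every access, while a pointer handle removes that cost but changes the ABI (size, invalid value) and gives up cheap checks:

#include <stdint.h>
#include <stddef.h>

struct packet_hdr { uint32_t len; };              /* hypothetical descriptor */

#define PKT_TABLE_SIZE 1024
static struct packet_hdr packet_table[PKT_TABLE_SIZE];

/* Index-style handle: compact and easy to range-check, but every access
 * pays a table lookup plus a bounds check. */
typedef uint32_t pkt_handle_idx_t;

static inline struct packet_hdr *pkt_from_idx(pkt_handle_idx_t h)
{
	return (h < PKT_TABLE_SIZE) ? &packet_table[h] : NULL;
}

/* Pointer-style handle: direct dereference with no translation cost, but no
 * cheap validity check and a different size/invalid value on the ABI. */
typedef struct packet_hdr *pkt_handle_ptr_t;

static inline struct packet_hdr *pkt_from_ptr(pkt_handle_ptr_t h)
{
	return h;
}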
Nicolas