Discussion about this post

Jack Sanniota

Fascinating article. Thanks for sharing

Huanxing Chen

Hi, thanks for the amazing article! I'm currently doing research on this topic and would love to hear your thoughts regarding the necessity of blockchain technology in verifying the authenticity of digital content in particular.

Here's how I conceptualise the current user journey:

1. Person A takes a picture of a scenery with his phone, computes a cryptographic hash of the image, and signs that hash with his private key. Anyone holding his public key can verify the signature against the hash to confirm that the image indeed originated from Person A.

2. The cryptographic hash of the image can be stored on a blockchain. The blockchain is immutable, meaning that any attempt by a malicious actor to alter the stored hash will fail: it will be easily spotted, and the network will not reach consensus on that version of the truth. It is also transparent, so further edits to the photo made by other individuals using software like Adobe Lightroom will be recorded in subsequent blocks, creating a transparent and auditable track record of edits made to the photo.

3. When Person B sees the image on the web, he can immediately check the origin of the photo and all the edits that have been made.
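As a concrete sketch of step 1, here is the hash-then-sign-then-verify flow. The RSA parameters below are deliberately tiny toy values purely to show the mechanics (a real system would use a vetted cryptographic library and, say, Ed25519 keys); the byte strings standing in for images are my own placeholders.

```python
import hashlib

# Toy RSA key pair -- illustration only, never use keys this small.
p, q = 61, 53
n = p * q                          # public modulus
e = 17                             # public exponent: Person A's public key is (n, e)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: Person A's private key

def sign(image_bytes: bytes) -> int:
    """Person A: hash the image, then sign the hash with the private key."""
    h = int(hashlib.sha256(image_bytes).hexdigest(), 16) % n
    return pow(h, d, n)

def verify(image_bytes: bytes, signature: int) -> bool:
    """Person B: re-hash the image and check the signature with the public key."""
    h = int(hashlib.sha256(image_bytes).hexdigest(), 16) % n
    return pow(signature, e, n) == h

sig = sign(b"photo pixels")
print(verify(b"photo pixels", sig))   # True: the untouched image verifies
print(verify(b"edited pixels", sig))  # False: any change to the bytes breaks it
```

Note that the key pair does two separate jobs here: the hash detects *any* change to the bytes, while the signature ties that hash to Person A specifically.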

My immediate thoughts on this are as such:

1. Deepfake Verification: The blockchain doesn't inherently have a way to discern whether an image is a deepfake or not. Its primary function in this context is to maintain an immutable and verifiable record of ownership and modification. The detection and prevention of deepfakes would require sophisticated AI and machine learning techniques, separate from the blockchain itself. In the user journey above, if Person A creates an ultra-realistic deepfake and signs it with his private key, it can still be propagated as authentic content from Person A. Perhaps what blockchain & cryptography enables is a sort of reputation slashing mechanism to keep deepfakes in check.

2. Hardware & Software standards: For a completely transparent and tamper-evident system, all hardware and software used to create and modify content would need to follow the same standards and protocols for cryptographic signing and proof generation, which would require significant coordination and cooperation across industries. In the example above, this would mean including the metadata of the phone camera and of the photo-editing software when generating the hash, so that Person B can verify the image was indeed captured by a camera and not generated by AI. However, this also means that power will likely concentrate in the hands of the coalition setting the industry standards, and smaller hardware and software players will have to opt in as a matter of survival. Sharing hardware and software metadata may also raise privacy issues.
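The "metadata included when generating the hash" idea above can be sketched as binding the image hash and a canonicalised metadata record into a single digest, so neither can be swapped out after capture. The field names and values here are entirely hypothetical, not any real standard's format:

```python
import hashlib
import json

# Hypothetical capture record a camera might produce alongside the pixels.
image_bytes = b"raw sensor data"
metadata = {"device": "ExamplePhone 12", "firmware": "1.4.2",
            "captured_at": "2024-05-01T09:30:00Z"}

def capture_digest(image: bytes, meta: dict) -> str:
    """Commit to the image hash and the metadata together in one digest."""
    payload = hashlib.sha256(image).digest() + json.dumps(meta, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

original = capture_digest(image_bytes, metadata)

# Swapping the claimed device changes the digest, so Person B can detect
# a "camera" image whose provenance metadata was replaced after the fact.
tampered = capture_digest(image_bytes, {**metadata, "device": "AIGen 1.0"})
print(original != tampered)  # True
```

The `sort_keys=True` canonicalisation matters: without a deterministic serialisation, the same metadata could produce different digests on different devices.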

3. Digital Identity Solution: A registry linking a person's digital identity to their real identity and owned devices is a prerequisite for the system to function as intended. It's a challenging problem, and a sensitive one due to privacy and security concerns. This is also the part I'm most confused by.

Cryptographic proof provides the "fingerprint" for the digital content, and blockchain technology seems to be a storage solution that keeps those fingerprints immutable and accessible.

Specifically, it can be used in 2 ways:

1. It enables a decentralised, immutable registry of provenance data (check if the content has been altered), &

2. It serves as the foundation on top of which a decentralised registry mapping digital identity to real-world identity is built (check who's the one that altered it).
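Use (1) above can be sketched as an append-only hash chain in which each record commits to the previous one; this is the property a blockchain provides at network scale, with consensus pinning the head of the chain. The record fields below are illustrative, not any real chain's format:

```python
import hashlib
import json

def entry_hash(entry: dict) -> str:
    """Deterministic digest of one provenance record."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class ProvenanceLog:
    """Append-only log: each record stores the hash of the previous record,
    so rewriting an earlier record breaks the link to the record after it."""
    def __init__(self):
        self.entries = []

    def append(self, content_hash: str, editor: str, action: str):
        prev = entry_hash(self.entries[-1]) if self.entries else "genesis"
        self.entries.append({"prev": prev, "content_hash": content_hash,
                             "editor": editor, "action": action})

    def verify(self) -> bool:
        prev = "genesis"
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            prev = entry_hash(entry)
        return True

log = ProvenanceLog()
log.append("hash-of-original", "person_a", "capture")
log.append("hash-of-edit", "person_c", "lightroom_edit")
print(log.verify())   # True: the chain is intact

log.entries[0]["editor"] = "impostor"   # try to rewrite history...
print(log.verify())   # False: the next record's "prev" link no longer matches
```

A single trusted database can offer the same chaining; what the blockchain adds is that no one party can quietly rewrite the chain and re-publish it as canonical, which connects to your later question about trusting a centralised storage provider.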

This is where I'm confused. C2PA is setting the hardware and software standards for image authentication, and it doesn't concern itself with linking users' digital identities to their real-world identities. It also doesn't use a blockchain at all to achieve this. Blockchain could be used in this regard, but I don't see any viable solutions out there yet. Will Person B be able to verify that 1) Person A (identity revealed) is the original creator of the image, 2) a pseudonym of Person A is, or 3) a "trusted and verified" individual is the creator? It seems to me this depends on the design of the identity system, but I'm finding it hard to conceptualise how it might fit in with existing standards like C2PA. Also, is a blockchain really necessary if we can trust the centralised cloud storage provider that's storing the cryptographic hash? I assume the value-add of blockchain here isn't solely limited to decentralisation?

Still trying to wrap my head around this topic, happy to hear your thoughts! Really appreciate it :)
