Using the Phonetic Search Type
When indexing an asset that includes multiple audio tracks, Phonetic Index reads the first audio track only by default. If you want MediaCentral Search to return phonetic data for additional audio tracks, you must configure settings in your source asset management system.
Audio (phonetic) metadata is created through a background indexing process that scans assets at approximately 50x to 200x real time. When you perform a search, you are not searching the actual audio assets, but rather an index of the audio. During normal system operation, the index service continues to scan for new assets. Although the process is extremely fast, audio metadata might not be immediately available for newly ingested assets.
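To get a feel for the 50x to 200x figure, the following back-of-the-envelope calculation estimates indexing time. This is an illustrative sketch only; the function name and the 100-hour example are not part of the product:

```python
def index_time_minutes(media_hours, speed_factor):
    """Estimate background indexing time in minutes for a given
    real-time speed factor (roughly 50x to 200x per the guide)."""
    return media_hours * 60 / speed_factor

# 100 hours of newly ingested media, at both ends of the quoted range:
slow = index_time_minutes(100, 50)   # 120.0 minutes (2 hours)
fast = index_time_minutes(100, 200)  # 30.0 minutes
```

In other words, even a large batch of new media is typically indexed within minutes to a few hours, which is why any gap in search availability is brief.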
Phonetic searches work best with fewer terms. In this release of MediaCentral Cloud UX, you are limited to a maximum of ten Phonetic search terms. If you create more than ten terms, the Search Results area displays a message that asks you to delete the additional Phonetic terms. When adding multiple Phonetic terms, all must be of the same type (fast or accurate). If you attempt to add a different phonetic search type, the system asks if you want to maintain the original search type, or switch to the new one.
Assets must be fully online and available before the audio can be indexed and discovered through a phonetic search. Phonetic search results are returned only for online master clips and not for subclips, sequences, or other asset types.
Conventional vs Large-Scale Searches
Conventional searches perform an in-depth analysis of the indexed data to produce the most accurate results possible. Large-scale searches are often conducted on systems that include many hours of indexed media. These searches use phonetic digests to return results more quickly. As a result, large-scale searches might return fewer assets because the faster search algorithm can miss some phonetic hits.
When you conduct a Phonetic (Fast) search, MediaCentral Cloud UX might conduct either a conventional or a large-scale search. The automatic selection process is partially determined by your search criteria and by the Large Scale Threshold value that your administrator entered when configuring the system (200 hours by default). If your search exceeds that value, the Fast search conducts a large-scale search automatically.
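The selection behavior described above can be pictured roughly as follows. This is an illustrative sketch under stated assumptions, not Avid's actual implementation; the function name and the `indexed_hours_in_scope` parameter (the hours of indexed media your search criteria cover) are hypothetical:

```python
LARGE_SCALE_THRESHOLD_HOURS = 200  # administrator-configured default

def choose_search_mode(search_type, indexed_hours_in_scope):
    """Illustrative sketch of how a search mode might be chosen.
    search_type is 'fast' or 'accurate'."""
    if search_type == "accurate":
        # Accurate searches always use the conventional algorithm.
        return "conventional"
    # Fast searches switch to the digest-based large-scale algorithm
    # once the indexed media in scope exceeds the threshold.
    if indexed_hours_in_scope > LARGE_SCALE_THRESHOLD_HOURS:
        return "large-scale"
    return "conventional"

print(choose_search_mode("fast", 500))      # large-scale
print(choose_search_mode("fast", 48))       # conventional
print(choose_search_mode("accurate", 500))  # conventional
```

Note how narrowing the search criteria (for example, with a date range) reduces the hours in scope and can push a Fast search back into the more accurate conventional mode, which is exactly what the use case below demonstrates.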
When you conduct a Phonetic (Accurate) search, the system conducts a conventional search only and does not switch to a large-scale search. If your phonetic index includes a large number of assets, your search could take longer to complete, but the results will be more accurate. You can increase the speed at which the Search app returns results by including additional search criteria.
Consider the following use case: you know that you have an asset, captured just two days ago, in which the subject says the word “London”.
• You add a single Phonetic (Fast) term to the search:
The engine most likely performs a large-scale search because it must search across the entire phonetic index for this term. However, in this case, the search does not return the asset because the subject did not say the matching term clearly enough.
Result: The Search app finds zero assets.
• In this updated example, you add a date-range qualifier to the search:
The Created date range reduces the number of hours of indexed media below the 200-hour threshold. Because the search now covers far fewer assets, the engine conducts a conventional search. In this case, the conventional search’s deeper analysis of the index finds the asset and includes it in the search results.
Result: The Search app finds 10 assets.
• As a final example, you change the search term to Phonetic (Accurate):
Result: The Search app finds the same 10 assets as the previous search. Because this search performs an in-depth analysis of the entire search index, the results most likely take longer to return than in the previous search.
Phonetic Language Options
During the system configuration process, your administrator might have installed one or more language packs on your MediaCentral Cloud UX system. Language packs provide the basic structure that Phonetic Index needs to read your audio media. For instance, a system might include separate packs for different dialects, such as Canadian French as compared to European French. Depending on your workflow, you might even have some media that includes multiple audio tracks with a different language on each track. Language packs help Phonetic Index properly analyze the spoken words in your audio media, providing you with more accurate search results.
Each phonetic term includes a menu on the far right side of the pill that allows you to select a specific language. If your system includes only one language pack, that language is the only available selection in the menu. The following illustration shows the language selection menu with three different options.
The default language is normally International English (EN), but your administrator can change the default that appears in the phonetic search pill based on your local preferences. This menu option affects phonetic searches only. The language that you select on the MediaCentral Cloud UX welcome screen or in the User Settings has no bearing on your search results.
If you add more than one phonetic term to the search, the last language that you select is used for all pills. If you alter the language selection for one pill, all pills are updated with this same language.