Interviews typically revolve around these questions (and there are quite a few!). For mid-to-senior Android developers, interviewers will dig deeper into each topic and add extensions. The key points below are fundamental questions that interviewers are certain to ask, so be sure to understand them!
- Basic Knowledge – Four Major Components (Lifecycle, Usage Scenarios, How to Start)
- Java Basics – Data Structures, Threads, MVC Framework
- Communication – Network Connections (HttpClient, HttpURLConnection), Sockets
- Data Persistence – SQLite, SharedPreferences, ContentProvider
- Performance Optimization – Layout Optimization, Memory Optimization, Battery Optimization
- Security – Data Encryption, Code Obfuscation, WebView/Js Calls, HTTPS
- UI – Animations
- Others – JNI, AIDL, Handler, Intent, etc.
- Open Source Frameworks – Volley, Glide, RxJava, etc. (List what you know and have used on your resume)
- Extensions – Features of Android 6.0/7.0/8.0/9.0, Kotlin Language, I/O Conference
Rather than hastily sending out resumes and rushing into interviews, it is better to spend a day or two reviewing the content above. To secure an offer, it is best to understand the implementation principles and know the usage scenarios. Do not memorize; understand! Interviewers are weary of hearing these answers recited all day, so it is best to offer some insights of your own.
Differences between Reference Types in Java, Specific Usage Scenarios
In Java, reference types are divided into four categories: Strong Reference, Soft Reference, Weak Reference, and Phantom Reference.
Strong Reference: A strong reference is the ordinary kind created by assigning the result of `new`. The garbage collector will never reclaim a strongly referenced object; even when memory is insufficient, the JVM throws an OOM error rather than collect it.
Soft Reference: A soft reference is implemented through SoftReference, and its lifecycle is shorter than that of a strong reference. When memory is insufficient, before throwing OOM, the garbage collector will reclaim objects that are only softly referenced. A common usage scenario for soft references is memory-sensitive caches, which can be reclaimed under memory pressure.
Weak Reference: A weak reference is implemented through WeakReference, and its lifecycle is even shorter than that of a soft reference. An object that is only weakly referenced is reclaimed as soon as the garbage collector reaches it, regardless of memory pressure. Weak references are likewise commonly used for memory-sensitive caches.
Phantom Reference: A phantom reference is implemented through PhantomReference and has the shortest lifecycle; its referent can be reclaimed at any time. Its get() method always returns null, so an object can never be accessed through a phantom reference; it must be used together with a ReferenceQueue, and its only purpose is to ensure some action can be taken after the object is finalized. The common usage scenario is tracking garbage collection activity: a notification is delivered through the queue before an object associated with a phantom reference is reclaimed.
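The four types can be exercised with a short example. The behavior shown here, before any GC pressure, is deterministic; actual reclamation timing is up to the collector:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.SoftReference;
import java.lang.ref.WeakReference;

public class ReferenceDemo {
    public static void main(String[] args) {
        Object strong = new Object();  // strong reference: never collected while reachable

        // Soft references survive GC while memory is plentiful.
        SoftReference<byte[]> soft = new SoftReference<>(new byte[1024]);
        System.out.println("soft alive: " + (soft.get() != null));

        // While a strong reference also exists, get() still returns the object.
        WeakReference<Object> weak = new WeakReference<>(strong);
        System.out.println("weak alive: " + (weak.get() != null));

        // Phantom references always return null from get(); they are only
        // useful together with a ReferenceQueue, to learn when the object
        // has been finalized.
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        PhantomReference<Object> phantom = new PhantomReference<>(new Object(), queue);
        System.out.println("phantom get(): " + phantom.get());  // always null
    }
}
```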
Difference between Exception and Error
Exception and Error both inherit from Throwable. In Java, only objects of type Throwable can be thrown or caught; it is the basic component type of the exception handling mechanism.
Exception and Error reflect Java’s classification of different exception situations. Exception refers to unexpected situations that can be anticipated during normal program operation, which can and should be caught for corresponding handling.
Error refers to situations that are unlikely to occur under normal circumstances, and most Errors will put the program in an abnormal, unrecoverable state. Since they are abnormal, they are inconvenient and unnecessary to catch. A common example is OutOfMemoryError, which is a subclass of Error.
Exceptions are further divided into checked and unchecked exceptions. Checked exceptions must be explicitly caught or declared in the code; this is enforced by the compiler. Unchecked exceptions are runtime exceptions, such as NullPointerException or ArrayIndexOutOfBoundsException, which are usually avoidable logic errors; the compiler does not require them to be caught, so whether to catch them depends on the specific need.
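A minimal example of the distinction (the file path below is made up and intentionally nonexistent):

```java
import java.io.FileNotFoundException;
import java.io.FileReader;

public class ExceptionDemo {
    // Checked exception: the compiler forces callers to catch or declare it.
    static void readConfig(String path) throws FileNotFoundException {
        new FileReader(path);
    }

    public static void main(String[] args) {
        try {
            readConfig("/no/such/file");
        } catch (FileNotFoundException e) {
            System.out.println("checked: caught " + e.getClass().getSimpleName());
        }

        // Unchecked (runtime) exception: catching it is optional.
        try {
            String s = null;
            s.length();
        } catch (NullPointerException e) {
            System.out.println("unchecked: caught " + e.getClass().getSimpleName());
        }

        // Error (e.g. OutOfMemoryError) also extends Throwable, but normally
        // should not be caught: the program state is unrecoverable.
    }
}
```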
volatile
When we mention volatile, we cannot help but mention concepts related to memory models. We all know that during program execution, each instruction is executed by the CPU, and the execution of instructions inevitably involves reading and writing data. The data during program execution is stored in main memory, which leads to a problem: since the execution speed of the CPU is much higher than the read and write speed of main memory, directly reading and writing data from main memory will reduce the efficiency of the CPU. To solve this problem, the concept of cache is introduced. Each CPU has a cache that reads data from main memory in advance and refreshes it to main memory at appropriate times after CPU operations.
This running mode has no issues in single-threaded scenarios, but in multi-threaded scenarios, it can lead to cache consistency issues. For example, consider the case of i=i+1 executed in two threads, assuming the initial value of i is 0. We expect to get 2 after both threads run, but there is a possibility that both threads read i from main memory into their respective caches, so both threads have i=0. After thread 1 executes and gets i=1, it refreshes it to main memory, and then thread 2 starts executing. Since thread 2’s i is 0 from the cache, after executing thread 2, the refreshed i in main memory is still 1.
This leads to cache consistency issues for shared variables. To solve this problem, a cache consistency protocol was proposed: when the CPU writes data, if it finds that it is operating on a shared variable, it will notify other CPUs to invalidate that shared variable in their caches. When other CPUs read the cached shared variable and find it is invalid, they will read the latest value from main memory again.
In Java multi-threaded development there are three important concepts: atomicity, visibility, and ordering. Atomicity: one or more operations either all execute or none execute. Visibility: a modification to a shared variable (a class member variable or static variable) made in one thread is immediately visible to other threads. Ordering: the program executes in the order the code is written. Declaring a variable volatile guarantees visibility and ordering, but not atomicity. Visibility was discussed above and is essential in multi-threaded development. As for ordering, it is worth mentioning that for efficiency the compiler and CPU may reorder instructions. In a single-threaded scenario the output after reordering remains consistent with the code logic, but in multi-threaded scenarios problems can arise; volatile prevents such reordering to a certain extent.
The principle of volatile is that an additional lock prefix instruction is added in the generated assembly code. This prefix instruction acts as a memory barrier, which has three functions:
- Ensure that during instruction reordering, instructions after the barrier are not moved before it, and instructions before the barrier are not moved after it.
- Immediately flush the modified shared variable from the cache to main memory.
- On a write operation, invalidate the corresponding cache line in other CPUs.
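The visibility guarantee can be demonstrated with a small sketch. The class and field names below are made up for illustration; note that volatile guarantees visibility and ordering but not atomicity, so `iterations++` here is still not a thread-safe increment:

```java
public class VolatileDemo {
    // Without volatile, the worker thread could keep a cached copy of
    // `running` and never observe the write made by the main thread.
    private static volatile boolean running = true;
    private static long iterations = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                iterations++;  // not atomic: volatile does not make this thread-safe
            }
            System.out.println("worker observed running = false");
        });
        worker.start();

        Thread.sleep(100);   // let the worker spin briefly
        running = false;     // the volatile write is flushed to main memory
        worker.join(2000);   // joins promptly because the flag change is visible
        System.out.println("worker terminated: " + !worker.isAlive());
    }
}
```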
Network Related Interview Questions
HTTP Status Codes
What are the differences between HTTP and HTTPS? How does HTTPS work?
HTTP is the Hypertext Transfer Protocol, while HTTPS can be simply understood as the secure version of HTTP: it inserts an SSL/TLS layer between HTTP and the transport layer to encrypt the data. HTTPS serves two main purposes: establishing a secure transmission channel to guarantee the security of the transmitted data, and confirming the authenticity of the website.
The main differences between HTTP and HTTPS are as follows:
- HTTPS requires applying for a certificate from a CA; free certificates are rare, so there is a certain cost.
- HTTP transmits in plaintext and has low security, while HTTPS encrypts the traffic on top of HTTP and is far more secure.
- The default ports differ: HTTP uses port 80, HTTPS uses port 443.
The working process of HTTPS
When talking about HTTPS, we must first mention encryption algorithms, which are divided into two categories: symmetric encryption and asymmetric encryption.
Symmetric encryption: the same key is used for both encryption and decryption. The advantage is speed, while the disadvantage is low security. Common symmetric encryption algorithms include DES, AES, etc.
Asymmetric encryption: asymmetric encryption consists of a pair of keys, namely public key and private key. Generally speaking, the private key is held by oneself, and the public key can be shared with others. The advantage is that its security is higher than symmetric encryption, while the disadvantage is that the data transmission efficiency is lower than symmetric encryption. Information encrypted with the public key can only be decrypted with the corresponding private key. Common asymmetric encryption methods include RSA, etc.
In practical use cases, symmetric and asymmetric encryption are generally used in combination. Asymmetric encryption is used to complete key transmission, and symmetric keys are used for data encryption and decryption. The combination of both ensures security while improving data transmission efficiency.
The specific process of HTTPS is as follows:
- The client (usually the browser) first sends a request for encrypted communication to the server, containing:
  - the protocol versions it supports, such as TLS 1.0;
  - a client-generated random number, random1, used later to derive the "session key";
  - the encryption methods it supports, such as RSA public-key encryption;
  - the compression methods it supports.
- The server receives the request and responds with:
  - confirmation of the protocol version to use, such as TLS 1.0 (if the versions supported by the browser and server do not match, the server closes the encrypted communication);
  - a server-generated random number, random2, used later to derive the "session key";
  - confirmation of the encryption method to use, such as RSA public-key encryption;
  - the server certificate.
- The client validates the certificate after receiving it:
  - first it verifies the security of the certificate;
  - after validation succeeds, the client generates a random number, the pre-master secret, encrypts it with the public key in the certificate, and sends it to the server.
After the server receives the content encrypted with the public key, it uses the private key to decrypt it to obtain the random number pre-master secret. It then generates a symmetric encryption key using random1, random2, and pre-master secret through a certain algorithm for subsequent interactions. The client will also use random1, random2, and pre-master secret with the same algorithm to generate the symmetric key.
Subsequent interactions will use the symmetric key generated in the previous step to encrypt and decrypt the transmitted content.
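The key idea, asymmetric encryption to exchange a key and then symmetric encryption for the payload, can be sketched with the JDK's javax.crypto API. This is a simplified illustration of the principle, not the real TLS handshake (which also involves certificate validation, MACs, and key derivation from random1/random2/pre-master secret):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.util.Arrays;

public class HybridCryptoDemo {
    public static void main(String[] args) throws Exception {
        // Server side: an RSA key pair (the public key would live in the certificate).
        KeyPairGenerator rsaGen = KeyPairGenerator.getInstance("RSA");
        rsaGen.initialize(2048);
        KeyPair rsa = rsaGen.generateKeyPair();

        // Client side: a symmetric "session key" (standing in for the key
        // derived from random1/random2/pre-master secret).
        KeyGenerator aesGen = KeyGenerator.getInstance("AES");
        aesGen.init(128);
        SecretKey session = aesGen.generateKey();

        // Client encrypts the session key with the server's public key.
        Cipher rsaCipher = Cipher.getInstance("RSA");
        rsaCipher.init(Cipher.ENCRYPT_MODE, rsa.getPublic());
        byte[] wrappedKey = rsaCipher.doFinal(session.getEncoded());

        // Server decrypts it with its private key.
        rsaCipher.init(Cipher.DECRYPT_MODE, rsa.getPrivate());
        SecretKey recovered = new SecretKeySpec(rsaCipher.doFinal(wrappedKey), "AES");
        System.out.println("keys match: "
                + Arrays.equals(session.getEncoded(), recovered.getEncoded()));

        // Subsequent traffic uses the fast symmetric key in both directions.
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, session);
        byte[] ciphertext = aes.doFinal("hello over https".getBytes(StandardCharsets.UTF_8));
        aes.init(Cipher.DECRYPT_MODE, recovered);
        System.out.println(new String(aes.doFinal(ciphertext), StandardCharsets.UTF_8));
    }
}
```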
TCP Three-Way Handshake Process
Android Interview Questions
What are the methods of inter-process communication?
AIDL, Broadcast, File, Socket, Pipe
What are the differences between static and dynamic broadcast registration?
- A dynamically registered receiver is not persistent: it follows the lifecycle of the component that registers it, so be sure to unregister it before, for example, the Activity is destroyed. A statically registered receiver is persistent: even when the application is closed, an incoming broadcast will cause the system to start the application to handle it.
- For an ordered broadcast, higher-priority receivers receive it first (whether static or dynamic); among receivers of the same priority, dynamic receivers take precedence over static ones.
- Among receivers of the same priority: for static receivers, the one scanned first takes precedence; for dynamic receivers, the one registered first takes precedence.
- For a default (normal) broadcast, priority is ignored and dynamic receivers take precedence over static ones. Among receivers of the same priority the ordering is as above: static, scanned first wins; dynamic, registered first wins.
Android Performance Optimization Tool Usage (This question is recommended in conjunction with performance optimization in Android)
Common performance optimization tools in Android include: Android Profiler provided by Android Studio, LeakCanary, BlockCanary.
Android Profiler is very useful; it can examine performance in three areas: CPU, memory, and network.
LeakCanary is a third-party library for detecting memory leaks. After it is integrated into a project, LeakCanary automatically detects memory leaks while the application runs and reports them.
BlockCanary is a third-party library for detecting UI jank; after integration, BlockCanary automatically detects UI stutters at runtime and reports them.
Class Loaders in Android
PathClassLoader can only load APKs that have been installed on the system; DexClassLoader can load jar/apk/dex files and can load uninstalled APKs from the SD card.
What types of animations are there in Android, and what are their characteristics and differences?
Animations in Android can generally be divided into three types: Frame Animation, Tween Animation (View Animation), and Property Animation (Object Animation).
- Frame Animation: a set of images configured through XML and played in sequence. Rarely used now.
- Tween Animation (View Animation): four kinds of operations: rotation, alpha, scaling, and translation. Rarely used now.
- Property Animation (Object Animation): the most commonly used type of animation today, and more powerful than tween animation. It can roughly be used in two ways: ViewPropertyAnimator and ObjectAnimator. The former suits common animations such as rotation, translation, scaling, and alpha, and is obtained conveniently via View.animate(). The latter suits animating our own custom controls: first add getter and setter methods for the property in the custom View, and note that after changing the property in the setter you must call invalidate() to redraw the View. Then call one of the ObjectAnimator factory methods (ofFloat(), ofInt(), ofObject(), etc.) to obtain an ObjectAnimator, and call start() to begin the animation.
The differences between tween animation and property animation:
- Tween animation: the parent container repeatedly redraws the view, so it appears to move, but the view itself is unchanged and remains in its original position.
- Property animation: genuinely changes the view by continuously modifying its internal property values.
Handler Mechanism
When talking about Handler, we must mention several classes closely related to it: Message, MessageQueue, Looper.
- Message. Two member variables of Message are worth noting: target and callback. target is the Handler that sent the message, and callback is the Runnable passed to handler.post(runnable); posting essentially creates a Message and assigns the Runnable to its callback field.
- MessageQueue. The message queue, obviously, stores messages. Its next() method is worth noting: it returns the next message to be processed.
- Looper. The Looper polling mechanism is the core that connects the Handler and the message queue. We all know that before creating a Handler on a thread we must first create a Looper with Looper.prepare(), and then start polling with Looper.loop(). Let's look at these two methods more closely.
  prepare(). This method does two things. First, it gets the current thread's Looper via ThreadLocal.get(); if it is not null, a RuntimeException is thrown, meaning a thread cannot create two Loopers. If it is null, it creates a Looper and binds it to the current thread via ThreadLocal.set(looper). It is worth mentioning that the message queue is actually created in the Looper's constructor.
  loop(). This method starts the polling of the whole event mechanism. In essence it starts an infinite loop that keeps fetching messages via MessageQueue's next() method and, once a message is obtained, calls msg.target.dispatchMessage() to handle it. As mentioned when discussing Message, msg.target is the Handler that sent the message, so this line effectively calls that Handler's dispatchMessage().
- Handler. After all this groundwork we finally arrive at the most important part. The analysis of Handler focuses on two parts: sending messages and processing messages.
  Sending messages. Sending includes not only sendMessage but also sendMessageDelayed, post, and postDelayed, among others; they all ultimately call sendMessageAtTime, which in turn calls enqueueMessage. enqueueMessage does two things: it binds the message to the current Handler via msg.target = this, then enqueues the message via queue.enqueueMessage.
  Processing messages. The core of message processing is the dispatchMessage() method. Its logic is simple: it first checks whether msg.callback is null; if not, it runs that Runnable; if it is null, it calls our handleMessage method.
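The decision order described above (callback first, then handleMessage) can be sketched in plain Java. These are deliberately simplified stand-in classes, not the real android.os ones; delayed delivery, Looper threading, and ThreadLocal binding are all omitted:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class HandlerSketch {
    static class Message {
        Handler target;     // the handler that sent this message
        Runnable callback;  // non-null when the message came from post(runnable)
        int what;
    }

    static class Handler {
        final Queue<Message> queue = new ArrayDeque<>(); // stands in for MessageQueue

        void post(Runnable r) { Message m = new Message(); m.callback = r; send(m); }
        void sendMessage(int what) { Message m = new Message(); m.what = what; send(m); }

        private void send(Message m) { m.target = this; queue.add(m); } // enqueueMessage

        // Looper.loop() equivalent: drain the queue and dispatch each message.
        void loopOnce() {
            Message msg;
            while ((msg = queue.poll()) != null) msg.target.dispatchMessage(msg);
        }

        void dispatchMessage(Message msg) {
            if (msg.callback != null) msg.callback.run(); // a posted Runnable wins
            else handleMessage(msg);                      // otherwise handleMessage
        }

        void handleMessage(Message msg) {
            System.out.println("handleMessage what=" + msg.what);
        }
    }

    public static void main(String[] args) {
        Handler h = new Handler();
        h.post(() -> System.out.println("runnable callback ran"));
        h.sendMessage(42);
        h.loopOnce();
    }
}
```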
Android Performance Optimization
In my opinion, performance optimization in Android can be divided into several aspects: memory optimization, layout optimization, network optimization, and installation package optimization.
Memory optimization: The next question is.
Layout optimization: The essence of layout optimization is to reduce the hierarchy of Views. Common layout optimization solutions are as follows:
- When both LinearLayout and RelativeLayout can achieve the layout, prefer RelativeLayout to reduce the View hierarchy.
- Extract commonly used layouts with the `<include>` tag.
- Load infrequently used layouts lazily with the `<ViewStub>` tag.
- Reduce the nesting levels of layouts with the `<merge>` tag.
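As a sketch, a layout combining these tags might look like the following; the file and id names here are made up for illustration:

```xml
<!-- activity_main.xml (sketch): reusing a layout with <include> and
     deferring an expensive, rarely shown layout with <ViewStub>. -->
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical">

    <!-- Shared header extracted into res/layout/common_header.xml; if that
         file's root is <merge>, no extra nesting level is added here. -->
    <include layout="@layout/common_header" />

    <!-- Not inflated until stub.inflate() (or setVisibility) is called. -->
    <ViewStub
        android:id="@+id/error_stub"
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:layout="@layout/error_view" />
</LinearLayout>
```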
Network optimization: Common network optimization solutions are as follows:
- Minimize network requests; merge requests whenever possible.
- Avoid DNS resolution where possible: a domain lookup can take hundreds of milliseconds and carries the risk of DNS hijacking. Depending on business needs, consider dynamic IP delivery, or fall back to domain-name access when IP access fails.
- Load large amounts of data with pagination.
- Compress network data transmission with GZIP.
- Cache network data to avoid frequent requests.
- When uploading images, compress them when necessary.
Installation package optimization: The core of installation package optimization is to reduce the APK size. Common solutions are as follows:
- Use obfuscation, which reduces the APK size somewhat, though the actual effect is minimal.
- Remove unnecessary resource files, such as images; compress images as much as possible without affecting the app's appearance. This has a noticeable effect.
- When using native (SO) libraries, prefer to keep only the armeabi-v7a version and delete the others. As of 2018, armeabi-v7a covers the vast majority of devices on the market; some very old devices may not be supported, but outdated devices generally do not need to be targeted. In practice the saving is significant: if each ABI's libraries total, say, 10M, then keeping only armeabi-v7a and deleting the armeabi and arm64-v8a versions saves 20M.
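As a sketch, restricting the packaged ABIs is typically done in the module's build.gradle; the exact block may vary with the Android Gradle Plugin version:

```groovy
// app/build.gradle (sketch): ship only the armeabi-v7a native libraries.
android {
    defaultConfig {
        ndk {
            abiFilters 'armeabi-v7a'
        }
    }
}
```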
Memory Optimization in Android
Memory optimization in Android can be divided into two points: avoiding memory leaks and expanding memory, which is essentially about increasing income and reducing expenditure.
The essence of a memory leak is that a long-lived object holds a reference to an object whose lifecycle should have ended, preventing it from being reclaimed.
Common causes of memory leaks include:
- Memory leaks caused by the singleton pattern. The most common example is a singleton whose creation requires a Context and is passed an Activity Context. Because the singleton is static, its lifecycle lasts from when the singleton class is loaded until the application ends, so even after the passed Activity has finished, the singleton still holds a reference to it, causing a leak. The solution is simple: do not pass an Activity Context; pass the Application Context instead.
- Memory leaks caused by static variables. Static variables are stored in the method area, with a lifecycle from class loading to the end of the program, which is very long. A common example is creating a static variable in an Activity that holds a reference to the Activity (this). Even after the Activity calls finish, it leaks, because the static variable's lifecycle is nearly the same as the whole application's and it keeps holding the Activity reference.
- Memory leaks caused by non-static inner classes, which hold an implicit reference to their outer class. The most common examples are Handlers and Threads inside Activities: a Handler or Thread created as a non-static inner class holds the Activity's reference while executing a delayed operation, and if the Activity finishes before the delayed operation completes, it leaks. There are two solutions: the first is to use a static inner class that reaches the Activity through a weak reference; the second is to call handler.removeCallbacksAndMessages in the Activity's onDestroy to cancel the delayed events.
- Memory leaks caused by not closing resources in time. Common examples include not closing various data streams promptly, not recycling Bitmaps, and so on.
- Memory leaks caused by not unregistering from third-party libraries in time. Some libraries provide register and unregister calls; the most common is EventBus, which must be registered in onCreate and unregistered in onDestroy. If it is not unregistered, EventBus, being a singleton, keeps holding the Activity's reference, causing a leak. Similarly, after using RxJava's timer operator for delayed operations, remember to call disposable.dispose() in onDestroy.
- Memory leaks caused by property animations. A common example is an Activity exiting while a property animation is still running: the View object still holds the Activity's reference, causing a leak. The solution is to cancel the animation in onDestroy by calling its cancel method.
- Memory leaks caused by WebView. WebView is special: even calling its destroy method still leaks. The best way to avoid it is to host the Activity containing the WebView in a separate process, and kill that process when the Activity ends. I recall that Alibaba DingTalk's WebView runs in a separate process, presumably using this approach.
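The "static inner class + weak reference" fix for the Handler/Thread leak can be illustrated in plain Java. MyActivity below is just a stand-in class for illustration, not a real android.app.Activity:

```java
import java.lang.ref.WeakReference;

public class LeakFixDemo {
    static class MyActivity {
        String state = "ready";
    }

    // A non-static inner class would hold an implicit strong reference to its
    // enclosing instance. A static nested class holds nothing implicitly, and
    // a WeakReference lets the Activity be collected once it has finished.
    static class SafeCallback implements Runnable {
        private final WeakReference<MyActivity> ref;

        SafeCallback(MyActivity activity) {
            this.ref = new WeakReference<>(activity);
        }

        boolean targetAlive() {
            return ref.get() != null;
        }

        @Override public void run() {
            MyActivity a = ref.get();
            if (a == null) return;  // Activity already collected: do nothing
            System.out.println("activity state: " + a.state);
        }
    }

    public static void main(String[] args) {
        MyActivity activity = new MyActivity();
        Runnable delayed = new SafeCallback(activity);
        delayed.run();  // activity still alive, so its state is printed
    }
}
```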
Expanding memory: Why should we expand our memory? Sometimes, we inevitably have to use many third-party commercial SDKs during actual development. These SDKs can vary in quality; large companies’ SDKs may have fewer memory leaks, but some smaller companies’ SDKs might not be reliable. In such cases where we cannot change the situation, the best way is to expand memory.
Expanding memory can generally be done in two ways: one is adding android:largeHeap="true" to the application tag in the manifest file, and the other is running the same application in multiple processes to expand its total available memory. The second method is very common; for example, the GeTui push SDK that I have used runs its service in a separate process.
In summary, memory optimization in Android is about increasing income and reducing expenditure: increasing income means expanding memory, and reducing expenditure means avoiding memory leaks.
Binder Mechanism
In Linux, to avoid interference from one process to other processes, processes are independent of each other. Within a process, there are user space and kernel space. This isolation is divided into two parts: inter-process isolation and intra-process isolation.
Since there is isolation between processes, there is also interaction. Inter-process communication is IPC, while communication between user space and kernel space is system calls.
To ensure independence and security, processes cannot directly access each other. Android is based on Linux, so it also needs to solve inter-process communication issues.
In fact, there are many ways for inter-process communication in Linux, such as pipes, sockets, etc. The reason why Android uses Binder instead of existing Linux methods is mainly due to two considerations: performance and security.
Performance: Mobile devices have strict performance requirements. Traditional inter-process communication methods in Linux, such as pipes and sockets, require copying data twice, while Binder only needs to copy once. Thus, Binder outperforms traditional inter-process communication.
Security: Traditional Linux inter-process communication does not include identity verification of both parties, which can lead to security issues. The Binder mechanism comes with identity verification, thus effectively improving security.
Binder is based on a client-server architecture and has four main components.
- Client: the client process.
- Server: the server process.
- ServiceManager: provides registration and lookup of services, returning proxy service objects.
- Binder Driver: responsible for establishing Binder connections between processes and for low-level operations such as moving data across processes.
The main process of the Binder mechanism is as follows:
- The server registers its service with ServiceManager through the Binder driver.
- The client queries ServiceManager for the registered service through the Binder driver.
- ServiceManager returns a proxy object for the server to the client through the Binder driver.
- Once the client holds the server's proxy object, it can perform inter-process communication.
Principle of LruCache
The core principle of LruCache is the effective utilization of LinkedHashMap. It internally contains a member variable of type LinkedHashMap. Four methods are worth noting: constructor, get, put, and trimToSize.
Constructor: The LruCache constructor does two things: it sets maxSize and creates a LinkedHashMap. Notably, LruCache sets the LinkedHashMap's accessOrder to true. accessOrder controls the iteration order of the LinkedHashMap: true means iteration in access order, and false means iteration in insertion order. The default is false (insertion order), but LruCache needs access order, so it explicitly sets accessOrder to true.
get method: Essentially, it calls the get method of LinkedHashMap. Since we set accessOrder to true, each time we call the get method, we put the currently accessed element at the end of this LinkedHashMap.
put method: Essentially, it also calls the put method of LinkedHashMap. Due to the characteristics of LinkedHashMap, each time we call the put method, the newly added element is also placed at the end of the LinkedHashMap. After adding, it calls the trimToSize method to ensure that the memory after adding does not exceed maxSize.
trimToSize method: The trimToSize method actually opens a while(true) infinite loop, continuously deleting elements from the head of LinkedHashMap until the memory after deletion is less than maxSize, then using break to exit the loop.
In summary, why is this algorithm called the Least Recently Used algorithm? The reason is simple: each put or get can be seen as an access. Because of the characteristics of LinkedHashMap, each accessed element is placed at the end. When our memory reaches the threshold, it triggers the trimToSize method to delete the elements at the head of LinkedHashMap until the current memory is less than maxSize. Why delete the elements at the head? The reason is obvious: the elements we have accessed recently will be placed at the end, so the elements at the head must be the Least Recently Used elements. Therefore, when memory is insufficient, these elements should be prioritized for deletion.
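The mechanism can be reproduced in a few lines of plain Java. This is a sketch built on LinkedHashMap's removeEldestEntry hook rather than Android's actual LruCache (which counts sizes via sizeOf() and evicts in a trimToSize() loop), but the access-order behavior is the same:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache over LinkedHashMap with accessOrder = true, mirroring
// the mechanism described above.
public class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public SimpleLruCache(int maxSize) {
        // accessOrder = true: iteration order is access order, most recent last.
        super(16, 0.75f, true);
        this.maxSize = maxSize;
    }

    // Called after every put(); returning true evicts the head entry,
    // which, in access order, is the least recently used one.
    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
    }

    public static void main(String[] args) {
        SimpleLruCache<String, Integer> cache = new SimpleLruCache<>(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");       // "a" moves to the tail (recently used)
        cache.put("c", 3);    // capacity exceeded: the head ("b") is evicted
        System.out.println(cache.keySet());  // prints [a, c]
    }
}
```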
DiskLruCache Principle
Design an Asynchronous Image Loading Framework
Designing an image loading framework must involve the concept of a three-level cache: memory cache, local cache, and network cache.
Memory cache: Caches Bitmap in memory, which runs fast but has a small memory capacity. Local cache: Caches images in files, which is slower but has a larger capacity. Network cache: Retrieves images from the network, speed is affected by network conditions.
If we design an image loading framework, the process must be as follows:
- After obtaining the image URL, first look for the Bitmap in the memory cache; if found, load it directly.
- If it is not in memory, look in the local cache; if found, load it directly.
- If it is in neither memory nor the local cache, download the image from the network, load it, and cache it in both the memory and local caches.
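The three-level lookup above can be sketched as follows. Plain maps stand in for LruCache and DiskLruCache, the network fetch is faked, and all names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

public class ImageLookupSketch {
    private final Map<String, String> memoryCache = new HashMap<>();
    private final Map<String, String> diskCache = new HashMap<>();

    String load(String url) {
        String bitmap = memoryCache.get(url);  // 1. memory: fastest
        if (bitmap != null) return bitmap;

        bitmap = diskCache.get(url);           // 2. disk: slower, survives restarts
        if (bitmap != null) {
            memoryCache.put(url, bitmap);      // promote to the memory cache
            return bitmap;
        }

        bitmap = downloadFromNetwork(url);     // 3. network: slowest
        diskCache.put(url, bitmap);            // write back to both levels
        memoryCache.put(url, bitmap);
        return bitmap;
    }

    private String downloadFromNetwork(String url) {
        return "bitmap-for-" + url;            // fake download for the sketch
    }

    public static void main(String[] args) {
        ImageLookupSketch loader = new ImageLookupSketch();
        System.out.println(loader.load("https://example.com/a.png")); // network path
        System.out.println(loader.load("https://example.com/a.png")); // memory hit
    }
}
```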
Above are some basic concepts. If it comes to specific code implementation, it would generally require several files:
- First, determine the memory cache, which is generally LruCache.
- Determine the local cache, usually DiskLruCache. Note that the file name for a cached image is usually the MD5 hash of its URL, to avoid exposing the URL directly.
- With the memory and local caches decided, create a class, say MemoryAndDiskCache (the name can be anything), containing the LruCache and DiskLruCache above. This class defines two methods, getBitmap and putBitmap, for retrieval and caching. The internal logic is simple: getBitmap looks up the Bitmap with memory-then-disk priority; putBitmap caches first in memory, then on disk.
- With the caching strategy class in place, create an ImageLoader class, which must contain two methods: displayImage(url, imageView) and downloadImage(url, imageView). In displayImage, first bind the URL to the ImageView via imageView.setTag(url), to avoid images appearing on the wrong rows when ImageViews are reused in a list. Then query MemoryAndDiskCache; if the image exists, load it directly; if not, call the method that fetches it from the network. There are many ways to fetch images from the network; I generally use OkHttp + Retrofit. After the image is downloaded, check whether imageView.getTag() still equals the image URL; if so, load the image, otherwise do not. This avoids misplaced images during asynchronous loading in a list. After obtaining the image, cache it through MemoryAndDiskCache as well.
How does event distribution work in Android?
When our finger touches the screen, the event is actually passed through Activity -> ViewGroup -> View, reaching the View that responds to our touch event.
When it comes to event distribution, we must mention the following methods: dispatchTouchEvent(), onInterceptTouchEvent(), onTouchEvent. Next, let’s roughly describe the event distribution mechanism according to the flow of Activity -> ViewGroup -> View.
When our finger touches the screen, an Action_Down type event is triggered. The current page’s Activity will respond first, meaning it will enter the dispatchTouchEvent() method of the Activity. The logic in this method is roughly as follows:
-
Call getWindow().superDispatchTouchEvent().
-
If the previous step returns true, return true directly; otherwise, return its own onTouchEvent(). This logic is easy to understand: if getWindow().superDispatchTouchEvent() returns true, it indicates that the current event has been processed and does not need to call its own onTouchEvent; otherwise, it means the event has not been processed, and the Activity needs to handle it by calling its own onTouchEvent.
The getWindow() method returns a Window type object. As we all know, PhoneWindow is the only implementation class of Window in Android. Therefore, this essentially calls the superDispatchTouchEvent() of PhoneWindow.
In the superDispatchTouchEvent() of PhoneWindow, it actually calls mDecor.superDispatchTouchEvent(event). This mDecor is the DecorView, which is a subclass of FrameLayout. In the superDispatchTouchEvent() of DecorView, it calls super.dispatchTouchEvent(). It is obvious that the event has been passed from Activity to ViewGroup. Next, let’s analyze the event handling methods in ViewGroup.
In ViewGroup’s dispatchTouchEvent(), the logic is roughly as follows:
-
Use onInterceptTouchEvent() to determine whether the current ViewGroup intercepts the event; by default, ViewGroups do not intercept.
-
If it intercepts, return its own onTouchEvent(); otherwise, determine based on child.dispatchTouchEvent()’s return value. If it returns true, return true; otherwise, return its own onTouchEvent(), achieving the upward transmission of unprocessed events.
Usually, a ViewGroup’s onInterceptTouchEvent() returns false, meaning it does not intercept. It is important to understand event sequences, such as Down events, Move events, and Up events. From Down to Up is one complete event sequence, corresponding to the finger pressing down and lifting up. If the ViewGroup intercepts the Down event, all subsequent events in the sequence will be handled by this ViewGroup’s onTouchEvent. If the ViewGroup did not intercept the Down event but intercepts a later event in the sequence, it will send an Action_Cancel event to the child View that had been handling the events, notifying it that the rest of the sequence has been taken over by the ViewGroup, so that the child View can restore its previous state.
Here’s a common example: in a RecyclerView with many Buttons, we first press a button and then slide a distance before releasing. At this point, the RecyclerView will scroll, and the button’s click event will not be triggered. In this example, when we press the button, it receives the Action_Down event. Normally, the subsequent sequence of events should be handled by this button. However, after sliding a distance, the RecyclerView detects that this is a sliding operation and intercepts the event sequence, executing its own onTouchEvent method, resulting in the list scrolling on the screen. Meanwhile, the button remains pressed, so when intercepting, it needs to send an Action_Cancel to notify the button to restore its previous state.
Event distribution ultimately reaches the dispatchTouchEvent() of View. In View’s dispatchTouchEvent() there is no onInterceptTouchEvent() method, which is easy to understand: a View is not a ViewGroup and contains no child Views, so there is no concept of intercepting. Ignoring some details, the dispatchTouchEvent() of View directly returns its own onTouchEvent(). If onTouchEvent() returns true, the event has been processed; otherwise, unprocessed events are transmitted upwards until some View processes the event or until it reaches Activity’s onTouchEvent, terminating the chain.
People often ask about the differences between onTouch and onTouchEvent. First, both methods are in dispatchTouchEvent() of View, and the logic is as follows:
-
If touchListener is not null, and this View is enabled, and onTouch returns true, it will directly return true, and the onTouchEvent() method will not be called.
-
If any of the above conditions are not met, it will proceed to the onTouchEvent() method. Therefore, the order of onTouch is before onTouchEvent.
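The onTouch-before-onTouchEvent ordering can be sketched as plain Java. This is a paraphrase of the logic described above, not the real framework source; the field name mOnTouchListener follows the framework’s convention.

```java
// Simplified sketch of View#dispatchTouchEvent's onTouch/onTouchEvent ordering.
public class ViewDispatchSketch {
    interface OnTouchListener { boolean onTouch(ViewDispatchSketch v, String event); }

    OnTouchListener mOnTouchListener;
    boolean enabled = true;
    boolean clickable = true;
    int onTouchEventCalls = 0;      // counts how often we fell through to onTouchEvent

    public boolean dispatchTouchEvent(String event) {
        // onTouch runs first; if the listener exists, the view is enabled,
        // and onTouch returns true, onTouchEvent is skipped entirely.
        if (mOnTouchListener != null && enabled
                && mOnTouchListener.onTouch(this, event)) {
            return true;
        }
        return onTouchEvent(event); // otherwise fall through to onTouchEvent
    }

    protected boolean onTouchEvent(String event) {
        onTouchEventCalls++;
        return clickable;           // clickable views consume the event by default
    }
}
```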
View Drawing Process
The starting point of view drawing is in the performTraversals() method of ViewRootImpl. This method sequentially calls mView.measure(), mView.layout(), and mView.draw().
The view drawing process is divided into three steps: Measure, Layout, and Draw, corresponding to the three methods measure, layout, and draw.
Measuring phase. The measure method will be called by the parent View. In the measure method, some optimizations and preparations are done before calling the onMeasure method for actual self-measurement. The onMeasure method does different things in View and ViewGroup:
-
View. The onMeasure method in View calculates its own size and saves it through setMeasuredDimension().
-
ViewGroup. The onMeasure method in ViewGroup calls the measure method of all child Views for self-measurement and saves the results. Then it calculates its own size based on the sizes and positions of the child Views and saves it.
Layout phase. The layout method will be called by the parent View. The layout method saves the size and position passed by the parent View and calls onLayout for actual internal layout. The onLayout method does different things in View and ViewGroup:
-
View. Since View has no child Views, it does nothing in onLayout.
-
ViewGroup. The onLayout method in ViewGroup calls the layout method of all child Views, passing the size and position to them to complete their internal layout.
Drawing phase. The draw method performs some scheduling work and then calls onDraw for the self-drawing of the View. The scheduling process of the draw method is roughly as follows:
-
Draw background. Corresponds to drawBackground(Canvas) method.
-
Draw content. Corresponds to onDraw(Canvas) method.
-
Draw child Views. Corresponds to dispatchDraw(Canvas) method.
-
Draw scrolling-related content and foreground. Corresponds to onDrawForeground(Canvas).
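The four-step dispatch of the draw phase can be sketched schematically. Each step here just records its name instead of drawing to a real Canvas; this is an illustration of the ordering above, not framework code.

```java
import java.util.ArrayList;
import java.util.List;

// Schematic of the draw() phase's dispatch order.
public class DrawOrderSketch {
    final List<String> log = new ArrayList<>();

    public void draw() {
        drawBackground();   // 1. background
        onDraw();           // 2. the view's own content
        dispatchDraw();     // 3. child views
        onDrawForeground(); // 4. scrolling-related content and foreground
    }

    void drawBackground()   { log.add("background"); }
    void onDraw()           { log.add("content"); }
    void dispatchDraw()     { log.add("children"); }
    void onDrawForeground() { log.add("foreground"); }
}
```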
Common Design Patterns in Android Source Code and Design Patterns Commonly Used in My Development
How do Android and JS interact?
In Android, the interaction between Android and JS can be divided into two aspects: Android calling methods in JS and JS calling methods in Android.
Android calls JS. There are two methods for Android to call JS:
-
WebView.loadUrl(“javascript:methodNameInJS”). The advantage of this method is its simplicity, while the disadvantage is that it has no return value. If a return value from the JS method is needed, JS must call an Android method to obtain that return value.
-
WebView.evaluateJavascript(“javascript:methodNameInJS”, ValueCallback). This method improves on loadUrl in that it can retrieve the return value from the JS method through the ValueCallback callback. The disadvantage is that it is only available from Android 4.4 onward, so compatibility on older devices is limited. However, as of 2018, most apps require a minimum version of 4.4 or higher, so I believe this compatibility issue is not significant.
JS calls Android. There are three methods for JS to call Android:
-
WebView.addJavascriptInterface(). This is the official solution for JS to call Android methods. It is important to note that the Android method intended for JS to call must be annotated with @JavascriptInterface to avoid security vulnerabilities. The drawback of this solution is that there were security vulnerabilities in versions before Android 4.2, but this has been fixed in 4.2 and later. Similarly, in 2018, the compatibility issue is not significant.
-
Override the shouldOverrideUrlLoading() method of WebViewClient to intercept URLs, parse the URLs, and call Android methods if they meet the established criteria. The advantage is that it avoids the security vulnerabilities of Android versions before 4.2, but the obvious disadvantage is that it cannot directly obtain the return value of the Android method; it can only get the return value by having Android call a JS method.
-
Override the onJsPrompt() method of WebChromeClient. Similar to the previous method, after intercepting the URL, if it meets the established criteria, the Android method can be called. Finally, if a return value is needed, the result.confirm(“return value from Android method”) can be used to return the return value to JS. The advantage of this method is that there are no vulnerabilities or compatibility limitations, and it is also convenient to obtain return values from Android methods. It’s worth noting that besides onJsPrompt, there are also onJsAlert and onJsConfirm methods in WebChromeClient. The reason for not choosing the other two methods is that onJsAlert has no return value, and onJsConfirm only has true and false as return values, while in frontend development, the prompt method is rarely called, which is why onJsPrompt is chosen.
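The URL-interception idea behind the second and third approaches can be modeled in plain Java. The `jsbridge://` scheme and the method names here are hypothetical, invented for illustration; in a real app this parsing would live inside shouldOverrideUrlLoading() or onJsPrompt(), and the returned string is what onJsPrompt would pass to result.confirm().

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the URL-interception bridge: the page calls a made-up scheme such
// as "jsbridge://echo?hi", and native code parses it and dispatches to a
// registered handler. Scheme and method names are hypothetical.
public class JsBridge {
    interface Handler { String handle(String arg); }
    private final Map<String, Handler> handlers = new HashMap<>();

    public void register(String method, Handler h) { handlers.put(method, h); }

    // Returns the handler's result (what onJsPrompt would hand to result.confirm()),
    // or null if the URL is not ours and should be loaded normally.
    public String intercept(String url) {
        if (!url.startsWith("jsbridge://")) return null;
        String rest = url.substring("jsbridge://".length());
        int q = rest.indexOf('?');
        String method = q >= 0 ? rest.substring(0, q) : rest;
        String arg = q >= 0 ? rest.substring(q + 1) : "";
        Handler h = handlers.get(method);
        return h != null ? h.handle(arg) : null;
    }
}
```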
Hotfix Principle
Activity Startup Process
SparseArray Principle
SparseArray is typically used in Android to replace HashMap as a data structure. Specifically, it replaces HashMap where the key is of Integer type and the value is of Object type. It’s important to note that SparseArray only implements the Cloneable interface, so it cannot be declared as a Map. Internally, SparseArray consists of two arrays: an int[] type mKeys array for storing all keys; and an Object[] type mValues array for storing all values. The most common comparison is between SparseArray and HashMap. Since SparseArray consists of two arrays, it occupies less memory than HashMap. We know that operations such as adding, deleting, modifying, and querying require finding the corresponding key-value pair, and SparseArray uses binary search for addressing, which is less efficient than the constant time complexity of HashMap. Speaking of binary search, it’s important to note that binary search requires the array to be sorted, and indeed, SparseArray is sorted in ascending order based on keys. In summary, SparseArray occupies less space than HashMap but is less efficient, representing a typical trade-off of time for space, suitable for smaller capacity storage. From a source code perspective, I think it is important to note the remove(), put(), and gc() methods of SparseArray.
-
remove(). The remove() method of SparseArray does not delete the entry and compact the array right away; instead, it sets the value to DELETED — a static Object placeholder in SparseArray — and sets SparseArray’s mGarbage flag to true. This flag causes the gc() method to be called at an appropriate time to compact the array and avoid wasting space. This improves efficiency: if a key added later equals a deleted key, the new value simply overwrites the DELETED slot.
-
gc(). The gc() method in SparseArray has nothing to do with JVM garbage collection. Internally it runs a while loop that moves key-value pairs whose value is not DELETED forward to overwrite the DELETED slots, thus compacting the array, and then sets mGarbage back to false to avoid wasted memory.
-
put(). The put method works as follows: if the key is found in the mKeys array through binary search, the value is overwritten directly. If not found, binary search yields the insertion index of the closest key. If the value at that index is DELETED, the new key-value pair overwrites the DELETED slot directly, avoiding shifting array elements and thus improving efficiency. Otherwise, it checks mGarbage; if true, it calls gc() to compact the array, then finds the suitable index, shifts the elements after it, and inserts the new key-value pair, which may trigger array expansion.
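A stripped-down sketch of these mechanics in plain Java — sorted int keys, binary search, and the DELETED placeholder. For brevity, gc(), mGarbage, and the insert-over-DELETED optimization in put() are omitted; this illustrates the data layout, not the full source.

```java
import java.util.Arrays;

// Minimal sketch of SparseArray's core mechanics.
public class MiniSparseArray {
    private static final Object DELETED = new Object();
    private int[] keys = new int[8];       // sorted ascending, like mKeys
    private Object[] values = new Object[8]; // parallel array, like mValues
    private int size = 0;

    public void put(int key, Object value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) { values[i] = value; return; }    // key exists (or DELETED): overwrite
        i = ~i;                                       // insertion point
        if (size == keys.length) {                    // grow both arrays
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key; values[i] = value; size++;
    }

    public Object get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return (i < 0 || values[i] == DELETED) ? null : values[i];
    }

    public void remove(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) values[i] = DELETED; // mark only; the real class compacts later in gc()
    }
}
```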
How to Avoid OOM When Loading Large Images
We know that the size of a Bitmap in memory is calculated as: width in pixels * height in pixels * bytes per pixel. To avoid OOM, there are two approaches: proportionally reduce the width and height, or reduce the bytes per pixel.
-
Proportionally reduce height and width. We know that Bitmap is created through BitmapFactory’s factory methods, such as decodeFile(), decodeStream(), decodeByteArray(), decodeResource(). All these methods have an Options parameter, which is an internal class of BitmapFactory that stores information about the Bitmap. One of the attributes of Options is inSampleSize. By modifying inSampleSize, we can reduce the height and width of the image, thus reducing the memory occupied by the Bitmap. It’s important to note that this inSampleSize must be a power of 2; if it is less than 1, the code will forcefully set inSampleSize to 1.
-
Reduce the bytes per pixel. The Options class also has a property inPreferredConfig, which defaults to ARGB_8888 (4 bytes per pixel). Changing it to RGB_565 (2 bytes per pixel) halves the memory usage; ARGB_4444 also halves it, but it has been deprecated due to its poor image quality.
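The inSampleSize computation can be written as plain Java. This mirrors the calculation recommended in the Android developer documentation; raw dimensions are passed instead of a BitmapFactory.Options object so the sketch runs off-device.

```java
// Standard inSampleSize computation: halve the dimensions until they fit the
// requested size, keeping inSampleSize a power of 2.
public class SampleSize {
    public static int calculateInSampleSize(int width, int height,
                                            int reqWidth, int reqHeight) {
        int inSampleSize = 1;
        if (height > reqHeight || width > reqWidth) {
            int halfHeight = height / 2, halfWidth = width / 2;
            // Keep doubling while both halved dimensions still exceed the target.
            while ((halfHeight / inSampleSize) >= reqHeight
                    && (halfWidth / inSampleSize) >= reqWidth) {
                inSampleSize *= 2;
            }
        }
        return inSampleSize;
    }

    // Bitmap memory = width * height * bytes per pixel
    // (4 for ARGB_8888, 2 for RGB_565).
    public static long bitmapBytes(int width, int height, int bytesPerPixel) {
        return (long) width * height * bytesPerPixel;
    }
}
```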
Loading Large Images
When loading high-definition large images, such as the Qingming Scroll, the screen cannot display everything at once, and considering memory situations, it is impossible to load everything into memory at once. This is where partial loading comes in. In Android, there is a class responsible for partial loading: BitmapRegionDecoder. The usage is simple; create the object using BitmapRegionDecoder.newInstance(), then call decodeRegion(Rect rect, BitmapFactory.Options options). The first parameter rect is the area to be displayed, and the second parameter is Options from BitmapFactory.
Source Code Analysis of Third-Party Libraries
Due to the large length of source code analysis, I will only provide my source code analysis links (Jianshu).
OkHttp
OkHttp Source Code Analysis
Retrofit
Retrofit Source Code Analysis 1, Retrofit Source Code Analysis 2, Retrofit Source Code Analysis 3
RxJava
RxJava Source Code Analysis
Glide
Glide Source Code Analysis
EventBus
EventBus Source Code Analysis
The general process is as follows: register:
-
Get the Class object of the subscriber.
-
Use reflection to find the collection of event handling methods in the subscriber.
-
Iterate through the collection of event handling methods, calling subscribe(subscriber, subscriberMethod) in the subscribe method. Inside subscribe:
-
Get the event type being handled through subscriberMethod.
-
Bind the subscriber and method into a Subscription object.
-
Obtain the Subscription collection through subscriptionsByEventType.get(eventType); if it is empty, create a new collection (lazy initialization).
-
Iterate through the Subscription collection, comparing event-handling priorities, and insert the new Subscription object at the appropriate position.
-
Obtain the event type collection through typesBySubscriber.get(subscriber); if it is empty, create a new collection (lazy initialization), then add the new event type to it.
-
Check whether the current event type is sticky. If it is not sticky, the subscribe process ends here.
-
If it is sticky, check the event inheritance property in EventBus, which defaults to true.
-
If event inheritance is true, iterate through the Map-typed stickyEvents, using isAssignableFrom to check whether the current event type is a superclass of each iterated event; if so, send that event.
-
If event inheritance is false, get the event via stickyEvents.get(eventType) and send it.
post:
-
postSticky
-
Add the event to the stickyEvents Map collection.
-
Call post method.
-
post
-
post
-
Add the event to the current thread’s event queue.
-
In a while loop, continuously retrieve events from the queue and call postSingleEvent to send each one.
-
In postSingleEvent, check the event inheritance property, which defaults to true.
-
If event inheritance is true, find all parent types of the current event and call postSingleEventForEventType for each; if false, only the current event type is sent.
-
In postSingleEventForEventType, retrieve the Subscription collection through subscriptionsByEventType.get(eventClass), iterate through it, and call postToSubscription to deliver the event.
-
In postToSubscription, there are four cases depending on the subscriber’s thread mode:
-
POSTING: call invokeSubscriber(subscription, event) directly to handle the event, which is essentially method.invoke().
-
MAIN: if already on the main thread, invoke invokeSubscriber directly; otherwise switch to the main thread via a Handler and invoke it there.
-
BACKGROUND: if not on the main thread, invoke invokeSubscriber directly; otherwise hand the event to a background thread and invoke invokeSubscriber there.
-
ASYNC: always hand the event to a separate thread and invoke invokeSubscriber there.
unregister:
-
Delete all subscriptions related to the subscriber from subscriptionsByEventType.
-
Delete all types related to the subscriber from typesBySubscriber.
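The register/post/unregister bookkeeping above can be condensed into a toy bus. It keeps just the two maps the real EventBus maintains (subscriptionsByEventType and typesBySubscriber) and omits sticky events, priorities, reflection-based method discovery, and thread modes — a sketch of the skeleton, not the library itself.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Toy event bus mirroring EventBus's two core maps.
public class TinyBus {
    // event type -> list of {subscriber, handler} pairs
    private final Map<Class<?>, List<Object[]>> subscriptionsByEventType = new HashMap<>();
    // subscriber -> event types it registered for (used by unregister)
    private final Map<Object, List<Class<?>>> typesBySubscriber = new HashMap<>();

    public <T> void register(Object subscriber, Class<T> eventType, Consumer<T> handler) {
        subscriptionsByEventType.computeIfAbsent(eventType, k -> new ArrayList<>())
                .add(new Object[]{subscriber, handler});
        typesBySubscriber.computeIfAbsent(subscriber, k -> new ArrayList<>()).add(eventType);
    }

    @SuppressWarnings("unchecked")
    public void post(Object event) {
        List<Object[]> subs = subscriptionsByEventType.get(event.getClass());
        if (subs == null) return;
        for (Object[] sub : subs) ((Consumer<Object>) sub[1]).accept(event);
    }

    public void unregister(Object subscriber) {
        List<Class<?>> types = typesBySubscriber.remove(subscriber);
        if (types == null) return;
        for (Class<?> type : types)
            subscriptionsByEventType.get(type).removeIf(s -> s[0] == subscriber);
    }
}
```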
Data Structures and Algorithms
Implement Quick Sort
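A standard in-place quicksort using the Lomuto partition scheme — the kind of implementation interviewers expect to see handwritten:

```java
// In-place quicksort, Lomuto partition. Average O(n log n), worst case O(n^2).
public class QuickSort {
    public static void sort(int[] a) { sort(a, 0, a.length - 1); }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        sort(a, lo, p - 1);   // elements less than the pivot
        sort(a, p + 1, hi);   // elements greater than or equal to the pivot
    }

    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) { int t = a[i]; a[i] = a[j]; a[j] = t; i++; }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;  // put the pivot in its final place
        return i;
    }
}
```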
Implement Merge Sort
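A classic top-down merge sort — O(n log n) time, O(n) extra space, and stable:

```java
import java.util.Arrays;

// Top-down merge sort: split, sort each half recursively, then merge.
public class MergeSort {
    public static void sort(int[] a) {
        if (a.length < 2) return;
        int mid = a.length / 2;
        int[] left = Arrays.copyOfRange(a, 0, mid);
        int[] right = Arrays.copyOfRange(a, mid, a.length);
        sort(left);
        sort(right);
        merge(a, left, right);
    }

    private static void merge(int[] out, int[] left, int[] right) {
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length)
            out[k++] = left[i] <= right[j] ? left[i++] : right[j++]; // <= keeps it stable
        while (i < left.length) out[k++] = left[i++];
        while (j < right.length) out[k++] = right[j++];
    }
}
```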
Implement Heap and Heap Sort
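Heap sort: build a max-heap in place, then repeatedly swap the root (the maximum) to the end and sift down — O(n log n) time, O(1) extra space, not stable:

```java
// In-place heap sort on a max-heap stored in the array itself.
public class HeapSort {
    public static void sort(int[] a) {
        int n = a.length;
        for (int i = n / 2 - 1; i >= 0; i--) siftDown(a, i, n); // heapify
        for (int end = n - 1; end > 0; end--) {
            int t = a[0]; a[0] = a[end]; a[end] = t; // move the max to the end
            siftDown(a, 0, end);                     // restore the heap on the prefix
        }
    }

    private static void siftDown(int[] a, int i, int n) {
        while (true) {
            int largest = i, l = 2 * i + 1, r = 2 * i + 2;
            if (l < n && a[l] > a[largest]) largest = l;
            if (r < n && a[r] > a[largest]) largest = r;
            if (largest == i) return;
            int t = a[i]; a[i] = a[largest]; a[largest] = t;
            i = largest;
        }
    }
}
```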
Explain the differences between sorting algorithms (time complexity and space complexity)
1. The Startup Process of Activity (do not answer lifecycle)
http://blog.csdn.net/luoshengyang/article/details/6689748
2. The Startup Modes of Activity and Their Use Cases
(1) Manifest settings, (2) startActivity flag http://blog.csdn.net/CodeEmperor/article/details/50481726 This extends to: the difference between stack (First In Last Out) and queue (First In First Out)
3. The Two Ways to Start a Service
(1) startService(), (2) bindService() http://www.jianshu.com/p/2fb6eb14fdec
4. Differences Between Broadcast Registration Methods
(1) Static registration (manifest), (2) dynamic registration http://www.jianshu.com/p/ea5e233d9f43 This extends to: when to use dynamic registration
5. The Differences Between HttpClient and HttpUrlConnection
http://blog.csdn.net/guolin_blog/article/details/12452307 This extends to: which request method is used in Volley (HttpClient before 2.3, HttpUrlConnection after 2.3)
6. The Differences Between HTTP and HTTPS
http://blog.csdn.net/whatday/article/details/38147103 This extends to: the implementation principle of HTTPS
7. Handwritten Algorithms (Selection and Bubble Sort Must Know)
http://www.jianshu.com/p/ae97c3ceea8d
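The two "must know" handwritten sorts from the heading above, selection sort and bubble sort, both O(n^2) time and O(1) extra space:

```java
// Selection sort and bubble sort -- the standard handwritten versions.
public class SimpleSorts {
    public static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int min = i;
            for (int j = i + 1; j < a.length; j++)
                if (a[j] < a[min]) min = j;        // find the smallest remaining element
            int t = a[i]; a[i] = a[min]; a[min] = t;
        }
    }

    public static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            boolean swapped = false;
            for (int j = 0; j < a.length - 1 - i; j++)
                if (a[j] > a[j + 1]) {             // bubble larger values toward the end
                    int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t;
                    swapped = true;
                }
            if (!swapped) break;                   // already sorted: early exit
        }
    }
}
```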
8. Process Survival (Non-Dying Process)
http://www.jianshu.com/p/63aafe3c12af This extends to: what is the priority of processes (the following article discusses it) https://segmentfault.com/a/1190000006251859
9. Methods of Inter-Process Communication
(1) AIDL, (2) Broadcast, (3) Messenger AIDL: https://www.jianshu.com/p/a8e43ad5d7d2 https://www.jianshu.com/p/0cca211df63c Messenger: http://blog.csdn.net/lmj623565791/article/details/47017485 This extends to: briefly describe Binder, http://blog.csdn.net/luoshengyang/article/details/6618363/
10. Loading Large Images
PS: One small company (which exaggerated its scale — I was misled) showed me their project directly and asked me to explain the implementation principle. The most speechless interview I’ve had; at one point I almost ended up writing the code for them. http://blog.csdn.net/lmj623565791/article/details/49300989
11. Three-Level Cache (All Major Image Frameworks Can Be Related to This)
(1) Memory Cache, (2) Local Cache, (3) Network Cache: http://blog.csdn.net/guolin_blog/article/details/9526203 Local: http://blog.csdn.net/guolin_blog/article/details/28863651
12. MVP Framework (Must Ask)
http://blog.csdn.net/lmj623565791/article/details/46596109 This extends to: Handwritten MVP example, differences with MVC, advantages of MVP
13. Explain Context
http://blog.csdn.net/lmj623565791/article/details/40481055
14. JNI
http://www.jianshu.com/p/aba734d5b5cd This extends to: where JNI is used in the project, such as core logic, key, encryption logic
15. Differences Between the Java Virtual Machine and Dalvik Virtual Machine
http://www.jianshu.com/p/923aebd31b65
16. Differences Between Thread Sleep and Wait
http://blog.csdn.net/liuzhenwen/article/details/4202967
17. View and ViewGroup Event Distribution
http://blog.csdn.net/guolin_blog/article/details/9097463 http://blog.csdn.net/guolin_blog/article/details/9153747
18. Saving Activity State
onSaveInstanceState() http://blog.csdn.net/yuzhiboyi/article/details/7677026
19. Interaction Between WebView and JS (APIs Called)
http://blog.csdn.net/cappuccinolau/article/details/8262821/
20. Memory Leak Detection and Memory Performance Optimization
http://blog.csdn.net/guolin_blog/article/details/42229627
21. Layout Optimization
http://blog.csdn.net/guolin_blog/article/details/43376527
22. Custom Views and Animations
The following two series explain this very thoroughly. Interviewers usually do not go too deep here; at most they give you an effect and ask you to explain how you would implement it. (1) http://www.gcssloop.com/customview/CustomViewIndex (2) http://blog.csdn.net/yanbober/article/details/50577855
Summary
What challenges were solved at work, what projects were accomplished (this question will definitely be asked, so be prepared)
These questions really rely on daily accumulation. For me, the most fulfilling project was developing the KCommon project, which greatly improved my development efficiency.
Author: BlackFlag https://www.jianshu.com/p/564b3920697a